Test Report: KVM_Linux_crio 17866

8c6a2e99755a9a0a7d8f4ed404c065becb2fd234:2024-01-08:32612

Failed tests (29/306)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 156.46
43 TestAddons/parallel/LocalPath 12.68
49 TestAddons/StoppedEnableDisable 155.5
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 177.6
213 TestMultiNode/serial/PingHostFrom2Pods 3.25
220 TestMultiNode/serial/RestartKeepsNodes 689.99
222 TestMultiNode/serial/StopMultiNode 143
229 TestPreload 283.18
231 TestScheduledStopUnix 52.55
235 TestRunningBinaryUpgrade 161.04
243 TestStoppedBinaryUpgrade/Upgrade 305.81
289 TestStartStop/group/old-k8s-version/serial/Stop 139.97
293 TestStartStop/group/no-preload/serial/Stop 140.22
295 TestStartStop/group/embed-certs/serial/Stop 139.62
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.06
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
302 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.97
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.82
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.88
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 520.53
311 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.77
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 319.38
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 100.14
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 80.27
320 TestStartStop/group/newest-cni/serial/Stop 140.45
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.44
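
The Ingress failure detailed below is the in-VM curl check timing out: the nginx pod became Ready within about 12s, but the minikube ssh curl step exited with status 28 (curl's "operation timed out" exit code) after roughly 2m11s. A rough reproduction sketch built from the same commands the test ran, as recorded in the log below; the profile name addons-417518 and the testdata manifest paths come from this run, so substitute your own profile and manifests when reproducing locally:

	# wait for the ingress-nginx controller to become ready
	kubectl --context addons-417518 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	# deploy the test ingress and the backing nginx pod/service
	kubectl --context addons-417518 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-417518 replace --force -f testdata/nginx-pod-svc.yaml
	# the step that failed in this run: curl the ingress from inside the VM with the expected Host header
	out/minikube-linux-amd64 -p addons-417518 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
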
TestAddons/parallel/Ingress (156.46s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-417518 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-417518 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-417518 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9362d43b-5fac-464e-8653-c188bc6b4d90] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9362d43b-5fac-464e-8653-c188bc6b4d90] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004800457s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-417518 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.219043216s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-417518 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.218
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-417518 addons disable ingress-dns --alsologtostderr -v=1: (1.768657688s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-417518 addons disable ingress --alsologtostderr -v=1: (7.901921034s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-417518 -n addons-417518
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-417518 logs -n 25: (1.372541174s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-947844 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |                     |
	|         | -p download-only-947844                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC | 08 Jan 24 21:02 UTC |
	| delete  | -p download-only-947844                                                                     | download-only-947844 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC | 08 Jan 24 21:02 UTC |
	| delete  | -p download-only-947844                                                                     | download-only-947844 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC | 08 Jan 24 21:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-537343 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |                     |
	|         | binary-mirror-537343                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43023                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-537343                                                                     | binary-mirror-537343 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC | 08 Jan 24 21:02 UTC |
	| addons  | disable dashboard -p                                                                        | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |                     |
	|         | addons-417518                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |                     |
	|         | addons-417518                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-417518 --wait=true                                                                | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC | 08 Jan 24 21:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-417518 addons                                                                        | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC | 08 Jan 24 21:05 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC | 08 Jan 24 21:05 UTC |
	|         | addons-417518                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-417518 ssh cat                                                                       | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC | 08 Jan 24 21:05 UTC |
	|         | /opt/local-path-provisioner/pvc-2fc9d749-712e-4d0c-8caa-1fbe8b09f623_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-417518 addons disable                                                                | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-417518 ip                                                                            | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC | 08 Jan 24 21:05 UTC |
	| addons  | addons-417518 addons disable                                                                | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC | 08 Jan 24 21:05 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC | 08 Jan 24 21:05 UTC |
	|         | addons-417518                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-417518 ssh curl -s                                                                   | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC | 08 Jan 24 21:05 UTC |
	|         | -p addons-417518                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC | 08 Jan 24 21:05 UTC |
	|         | -p addons-417518                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-417518 addons disable                                                                | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:05 UTC | 08 Jan 24 21:05 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-417518 addons                                                                        | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:06 UTC | 08 Jan 24 21:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-417518 addons                                                                        | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:06 UTC | 08 Jan 24 21:06 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-417518 ip                                                                            | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:07 UTC | 08 Jan 24 21:07 UTC |
	| addons  | addons-417518 addons disable                                                                | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:07 UTC | 08 Jan 24 21:07 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-417518 addons disable                                                                | addons-417518        | jenkins | v1.32.0 | 08 Jan 24 21:07 UTC | 08 Jan 24 21:07 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:02:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:02:24.702346  342378 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:02:24.702596  342378 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:02:24.702605  342378 out.go:309] Setting ErrFile to fd 2...
	I0108 21:02:24.702610  342378 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:02:24.702788  342378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 21:02:24.703420  342378 out.go:303] Setting JSON to false
	I0108 21:02:24.704463  342378 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6271,"bootTime":1704741474,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:02:24.704531  342378 start.go:138] virtualization: kvm guest
	I0108 21:02:24.706990  342378 out.go:177] * [addons-417518] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:02:24.708639  342378 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:02:24.708591  342378 notify.go:220] Checking for updates...
	I0108 21:02:24.710070  342378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:02:24.711544  342378 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:02:24.712867  342378 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:02:24.714059  342378 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:02:24.715261  342378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:02:24.716695  342378 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:02:24.749230  342378 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:02:24.750442  342378 start.go:298] selected driver: kvm2
	I0108 21:02:24.750457  342378 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:02:24.750469  342378 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:02:24.751180  342378 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:02:24.751255  342378 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:02:24.766356  342378 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:02:24.766448  342378 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:02:24.766658  342378 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:02:24.766713  342378 cni.go:84] Creating CNI manager for ""
	I0108 21:02:24.766730  342378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:02:24.766741  342378 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:02:24.766752  342378 start_flags.go:321] config:
	{Name:addons-417518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-417518 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:02:24.766883  342378 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:02:24.768607  342378 out.go:177] * Starting control plane node addons-417518 in cluster addons-417518
	I0108 21:02:24.769942  342378 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:02:24.769983  342378 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:02:24.770002  342378 cache.go:56] Caching tarball of preloaded images
	I0108 21:02:24.770074  342378 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:02:24.770084  342378 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:02:24.770423  342378 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/config.json ...
	I0108 21:02:24.770448  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/config.json: {Name:mk2e835f47039d06d60b38b488dfcb416091bd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:24.770573  342378 start.go:365] acquiring machines lock for addons-417518: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:02:24.770617  342378 start.go:369] acquired machines lock for "addons-417518" in 31.011µs
	I0108 21:02:24.770634  342378 start.go:93] Provisioning new machine with config: &{Name:addons-417518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-417518 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:02:24.770697  342378 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 21:02:24.772470  342378 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0108 21:02:24.772638  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:02:24.772686  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:02:24.786670  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0108 21:02:24.787123  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:02:24.787787  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:02:24.787810  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:02:24.788199  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:02:24.788444  342378 main.go:141] libmachine: (addons-417518) Calling .GetMachineName
	I0108 21:02:24.788584  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:02:24.788751  342378 start.go:159] libmachine.API.Create for "addons-417518" (driver="kvm2")
	I0108 21:02:24.788791  342378 client.go:168] LocalClient.Create starting
	I0108 21:02:24.788841  342378 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem
	I0108 21:02:24.997760  342378 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem
	I0108 21:02:25.264633  342378 main.go:141] libmachine: Running pre-create checks...
	I0108 21:02:25.264661  342378 main.go:141] libmachine: (addons-417518) Calling .PreCreateCheck
	I0108 21:02:25.265221  342378 main.go:141] libmachine: (addons-417518) Calling .GetConfigRaw
	I0108 21:02:25.265744  342378 main.go:141] libmachine: Creating machine...
	I0108 21:02:25.265762  342378 main.go:141] libmachine: (addons-417518) Calling .Create
	I0108 21:02:25.265919  342378 main.go:141] libmachine: (addons-417518) Creating KVM machine...
	I0108 21:02:25.267281  342378 main.go:141] libmachine: (addons-417518) DBG | found existing default KVM network
	I0108 21:02:25.268097  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:25.267927  342400 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147910}
	I0108 21:02:25.273793  342378 main.go:141] libmachine: (addons-417518) DBG | trying to create private KVM network mk-addons-417518 192.168.39.0/24...
	I0108 21:02:25.342164  342378 main.go:141] libmachine: (addons-417518) Setting up store path in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518 ...
	I0108 21:02:25.342202  342378 main.go:141] libmachine: (addons-417518) DBG | private KVM network mk-addons-417518 192.168.39.0/24 created
	I0108 21:02:25.342216  342378 main.go:141] libmachine: (addons-417518) Building disk image from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 21:02:25.342321  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:25.342097  342400 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:02:25.342401  342378 main.go:141] libmachine: (addons-417518) Downloading /home/jenkins/minikube-integration/17866-334768/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 21:02:25.578709  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:25.578591  342400 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa...
	I0108 21:02:25.805222  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:25.805097  342400 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/addons-417518.rawdisk...
	I0108 21:02:25.805246  342378 main.go:141] libmachine: (addons-417518) DBG | Writing magic tar header
	I0108 21:02:25.805256  342378 main.go:141] libmachine: (addons-417518) DBG | Writing SSH key tar header
	I0108 21:02:25.805265  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:25.805216  342400 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518 ...
	I0108 21:02:25.805300  342378 main.go:141] libmachine: (addons-417518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518
	I0108 21:02:25.805322  342378 main.go:141] libmachine: (addons-417518) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518 (perms=drwx------)
	I0108 21:02:25.805382  342378 main.go:141] libmachine: (addons-417518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines
	I0108 21:02:25.805390  342378 main.go:141] libmachine: (addons-417518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:02:25.805399  342378 main.go:141] libmachine: (addons-417518) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines (perms=drwxr-xr-x)
	I0108 21:02:25.805411  342378 main.go:141] libmachine: (addons-417518) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube (perms=drwxr-xr-x)
	I0108 21:02:25.805423  342378 main.go:141] libmachine: (addons-417518) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768 (perms=drwxrwxr-x)
	I0108 21:02:25.805437  342378 main.go:141] libmachine: (addons-417518) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 21:02:25.805453  342378 main.go:141] libmachine: (addons-417518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768
	I0108 21:02:25.805472  342378 main.go:141] libmachine: (addons-417518) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 21:02:25.805488  342378 main.go:141] libmachine: (addons-417518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 21:02:25.805498  342378 main.go:141] libmachine: (addons-417518) Creating domain...
	I0108 21:02:25.805518  342378 main.go:141] libmachine: (addons-417518) DBG | Checking permissions on dir: /home/jenkins
	I0108 21:02:25.805529  342378 main.go:141] libmachine: (addons-417518) DBG | Checking permissions on dir: /home
	I0108 21:02:25.805536  342378 main.go:141] libmachine: (addons-417518) DBG | Skipping /home - not owner
	I0108 21:02:25.806504  342378 main.go:141] libmachine: (addons-417518) define libvirt domain using xml: 
	I0108 21:02:25.806542  342378 main.go:141] libmachine: (addons-417518) <domain type='kvm'>
	I0108 21:02:25.806556  342378 main.go:141] libmachine: (addons-417518)   <name>addons-417518</name>
	I0108 21:02:25.806572  342378 main.go:141] libmachine: (addons-417518)   <memory unit='MiB'>4000</memory>
	I0108 21:02:25.806584  342378 main.go:141] libmachine: (addons-417518)   <vcpu>2</vcpu>
	I0108 21:02:25.806593  342378 main.go:141] libmachine: (addons-417518)   <features>
	I0108 21:02:25.806604  342378 main.go:141] libmachine: (addons-417518)     <acpi/>
	I0108 21:02:25.806619  342378 main.go:141] libmachine: (addons-417518)     <apic/>
	I0108 21:02:25.806629  342378 main.go:141] libmachine: (addons-417518)     <pae/>
	I0108 21:02:25.806638  342378 main.go:141] libmachine: (addons-417518)     
	I0108 21:02:25.806675  342378 main.go:141] libmachine: (addons-417518)   </features>
	I0108 21:02:25.806706  342378 main.go:141] libmachine: (addons-417518)   <cpu mode='host-passthrough'>
	I0108 21:02:25.806721  342378 main.go:141] libmachine: (addons-417518)   
	I0108 21:02:25.806732  342378 main.go:141] libmachine: (addons-417518)   </cpu>
	I0108 21:02:25.806745  342378 main.go:141] libmachine: (addons-417518)   <os>
	I0108 21:02:25.806754  342378 main.go:141] libmachine: (addons-417518)     <type>hvm</type>
	I0108 21:02:25.806764  342378 main.go:141] libmachine: (addons-417518)     <boot dev='cdrom'/>
	I0108 21:02:25.806776  342378 main.go:141] libmachine: (addons-417518)     <boot dev='hd'/>
	I0108 21:02:25.806803  342378 main.go:141] libmachine: (addons-417518)     <bootmenu enable='no'/>
	I0108 21:02:25.806826  342378 main.go:141] libmachine: (addons-417518)   </os>
	I0108 21:02:25.806848  342378 main.go:141] libmachine: (addons-417518)   <devices>
	I0108 21:02:25.806868  342378 main.go:141] libmachine: (addons-417518)     <disk type='file' device='cdrom'>
	I0108 21:02:25.806891  342378 main.go:141] libmachine: (addons-417518)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/boot2docker.iso'/>
	I0108 21:02:25.806904  342378 main.go:141] libmachine: (addons-417518)       <target dev='hdc' bus='scsi'/>
	I0108 21:02:25.806916  342378 main.go:141] libmachine: (addons-417518)       <readonly/>
	I0108 21:02:25.806927  342378 main.go:141] libmachine: (addons-417518)     </disk>
	I0108 21:02:25.806939  342378 main.go:141] libmachine: (addons-417518)     <disk type='file' device='disk'>
	I0108 21:02:25.806964  342378 main.go:141] libmachine: (addons-417518)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 21:02:25.806983  342378 main.go:141] libmachine: (addons-417518)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/addons-417518.rawdisk'/>
	I0108 21:02:25.806996  342378 main.go:141] libmachine: (addons-417518)       <target dev='hda' bus='virtio'/>
	I0108 21:02:25.807010  342378 main.go:141] libmachine: (addons-417518)     </disk>
	I0108 21:02:25.807023  342378 main.go:141] libmachine: (addons-417518)     <interface type='network'>
	I0108 21:02:25.807036  342378 main.go:141] libmachine: (addons-417518)       <source network='mk-addons-417518'/>
	I0108 21:02:25.807051  342378 main.go:141] libmachine: (addons-417518)       <model type='virtio'/>
	I0108 21:02:25.807063  342378 main.go:141] libmachine: (addons-417518)     </interface>
	I0108 21:02:25.807077  342378 main.go:141] libmachine: (addons-417518)     <interface type='network'>
	I0108 21:02:25.807090  342378 main.go:141] libmachine: (addons-417518)       <source network='default'/>
	I0108 21:02:25.807105  342378 main.go:141] libmachine: (addons-417518)       <model type='virtio'/>
	I0108 21:02:25.807121  342378 main.go:141] libmachine: (addons-417518)     </interface>
	I0108 21:02:25.807135  342378 main.go:141] libmachine: (addons-417518)     <serial type='pty'>
	I0108 21:02:25.807152  342378 main.go:141] libmachine: (addons-417518)       <target port='0'/>
	I0108 21:02:25.807166  342378 main.go:141] libmachine: (addons-417518)     </serial>
	I0108 21:02:25.807178  342378 main.go:141] libmachine: (addons-417518)     <console type='pty'>
	I0108 21:02:25.807199  342378 main.go:141] libmachine: (addons-417518)       <target type='serial' port='0'/>
	I0108 21:02:25.807214  342378 main.go:141] libmachine: (addons-417518)     </console>
	I0108 21:02:25.807227  342378 main.go:141] libmachine: (addons-417518)     <rng model='virtio'>
	I0108 21:02:25.807241  342378 main.go:141] libmachine: (addons-417518)       <backend model='random'>/dev/random</backend>
	I0108 21:02:25.807251  342378 main.go:141] libmachine: (addons-417518)     </rng>
	I0108 21:02:25.807261  342378 main.go:141] libmachine: (addons-417518)     
	I0108 21:02:25.807283  342378 main.go:141] libmachine: (addons-417518)     
	I0108 21:02:25.807299  342378 main.go:141] libmachine: (addons-417518)   </devices>
	I0108 21:02:25.807312  342378 main.go:141] libmachine: (addons-417518) </domain>
	I0108 21:02:25.807323  342378 main.go:141] libmachine: (addons-417518) 
	I0108 21:02:25.813770  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:24:ce:91 in network default
	I0108 21:02:25.814242  342378 main.go:141] libmachine: (addons-417518) Ensuring networks are active...
	I0108 21:02:25.814262  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:25.814906  342378 main.go:141] libmachine: (addons-417518) Ensuring network default is active
	I0108 21:02:25.815175  342378 main.go:141] libmachine: (addons-417518) Ensuring network mk-addons-417518 is active
	I0108 21:02:25.815661  342378 main.go:141] libmachine: (addons-417518) Getting domain xml...
	I0108 21:02:25.816251  342378 main.go:141] libmachine: (addons-417518) Creating domain...
	I0108 21:02:27.085255  342378 main.go:141] libmachine: (addons-417518) Waiting to get IP...
	I0108 21:02:27.086088  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:27.086523  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:27.086548  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:27.086492  342400 retry.go:31] will retry after 234.206141ms: waiting for machine to come up
	I0108 21:02:27.322080  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:27.322413  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:27.322445  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:27.322378  342400 retry.go:31] will retry after 372.737621ms: waiting for machine to come up
	I0108 21:02:27.697104  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:27.697559  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:27.697591  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:27.697505  342400 retry.go:31] will retry after 350.291107ms: waiting for machine to come up
	I0108 21:02:28.048985  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:28.049548  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:28.049588  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:28.049493  342400 retry.go:31] will retry after 456.088231ms: waiting for machine to come up
	I0108 21:02:28.507189  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:28.507761  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:28.507795  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:28.507659  342400 retry.go:31] will retry after 692.213679ms: waiting for machine to come up
	I0108 21:02:29.201634  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:29.202013  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:29.202064  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:29.201956  342400 retry.go:31] will retry after 833.950046ms: waiting for machine to come up
	I0108 21:02:30.037660  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:30.038046  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:30.038080  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:30.037989  342400 retry.go:31] will retry after 859.045776ms: waiting for machine to come up
	I0108 21:02:30.899082  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:30.899498  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:30.899532  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:30.899421  342400 retry.go:31] will retry after 1.171356661s: waiting for machine to come up
	I0108 21:02:32.072628  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:32.072995  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:32.073028  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:32.072942  342400 retry.go:31] will retry after 1.182285842s: waiting for machine to come up
	I0108 21:02:33.256683  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:33.257139  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:33.257185  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:33.257095  342400 retry.go:31] will retry after 1.726352618s: waiting for machine to come up
	I0108 21:02:34.984986  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:34.985466  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:34.985499  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:34.985413  342400 retry.go:31] will retry after 2.797645255s: waiting for machine to come up
	I0108 21:02:37.786628  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:37.787214  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:37.787244  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:37.787161  342400 retry.go:31] will retry after 2.487724979s: waiting for machine to come up
	I0108 21:02:40.276067  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:40.276448  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:40.276474  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:40.276418  342400 retry.go:31] will retry after 3.073639661s: waiting for machine to come up
	I0108 21:02:43.353747  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:43.354165  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find current IP address of domain addons-417518 in network mk-addons-417518
	I0108 21:02:43.354199  342378 main.go:141] libmachine: (addons-417518) DBG | I0108 21:02:43.354107  342400 retry.go:31] will retry after 3.865866783s: waiting for machine to come up
	I0108 21:02:47.222955  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.223340  342378 main.go:141] libmachine: (addons-417518) Found IP for machine: 192.168.39.218
	I0108 21:02:47.223370  342378 main.go:141] libmachine: (addons-417518) Reserving static IP address...
	I0108 21:02:47.223393  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has current primary IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.223749  342378 main.go:141] libmachine: (addons-417518) DBG | unable to find host DHCP lease matching {name: "addons-417518", mac: "52:54:00:96:c6:e8", ip: "192.168.39.218"} in network mk-addons-417518
	I0108 21:02:47.357424  342378 main.go:141] libmachine: (addons-417518) Reserved static IP address: 192.168.39.218
	I0108 21:02:47.357467  342378 main.go:141] libmachine: (addons-417518) Waiting for SSH to be available...
	I0108 21:02:47.357479  342378 main.go:141] libmachine: (addons-417518) DBG | Getting to WaitForSSH function...
	I0108 21:02:47.360191  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.360665  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:47.360703  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.360735  342378 main.go:141] libmachine: (addons-417518) DBG | Using SSH client type: external
	I0108 21:02:47.360752  342378 main.go:141] libmachine: (addons-417518) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa (-rw-------)
	I0108 21:02:47.360785  342378 main.go:141] libmachine: (addons-417518) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:02:47.360815  342378 main.go:141] libmachine: (addons-417518) DBG | About to run SSH command:
	I0108 21:02:47.360829  342378 main.go:141] libmachine: (addons-417518) DBG | exit 0
	I0108 21:02:47.467523  342378 main.go:141] libmachine: (addons-417518) DBG | SSH cmd err, output: <nil>: 
	I0108 21:02:47.467825  342378 main.go:141] libmachine: (addons-417518) KVM machine creation complete!
	I0108 21:02:47.468111  342378 main.go:141] libmachine: (addons-417518) Calling .GetConfigRaw
	I0108 21:02:47.470176  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:02:47.470418  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:02:47.470587  342378 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 21:02:47.470608  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:02:47.471984  342378 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 21:02:47.472002  342378 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 21:02:47.472008  342378 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 21:02:47.472015  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:47.474342  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.474700  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:47.474726  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.474900  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:47.475086  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:47.475249  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:47.475418  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:47.475608  342378 main.go:141] libmachine: Using SSH client type: native
	I0108 21:02:47.475946  342378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0108 21:02:47.475960  342378 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 21:02:47.606687  342378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:02:47.606724  342378 main.go:141] libmachine: Detecting the provisioner...
	I0108 21:02:47.606738  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:47.609547  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.609912  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:47.609945  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.610107  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:47.610310  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:47.610495  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:47.610668  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:47.610840  342378 main.go:141] libmachine: Using SSH client type: native
	I0108 21:02:47.611189  342378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0108 21:02:47.611204  342378 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 21:02:47.744145  342378 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 21:02:47.744275  342378 main.go:141] libmachine: found compatible host: buildroot
	I0108 21:02:47.744290  342378 main.go:141] libmachine: Provisioning with buildroot...
	I0108 21:02:47.744300  342378 main.go:141] libmachine: (addons-417518) Calling .GetMachineName
	I0108 21:02:47.744565  342378 buildroot.go:166] provisioning hostname "addons-417518"
	I0108 21:02:47.744596  342378 main.go:141] libmachine: (addons-417518) Calling .GetMachineName
	I0108 21:02:47.744865  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:47.747288  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.747662  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:47.747694  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.747824  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:47.748013  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:47.748216  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:47.748343  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:47.748462  342378 main.go:141] libmachine: Using SSH client type: native
	I0108 21:02:47.748797  342378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0108 21:02:47.748811  342378 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-417518 && echo "addons-417518" | sudo tee /etc/hostname
	I0108 21:02:47.897854  342378 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-417518
	
	I0108 21:02:47.897894  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:47.901023  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.901482  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:47.901519  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:47.901749  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:47.901953  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:47.902100  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:47.902233  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:47.902386  342378 main.go:141] libmachine: Using SSH client type: native
	I0108 21:02:47.902712  342378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0108 21:02:47.902729  342378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-417518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-417518/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-417518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:02:48.045489  342378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:02:48.045544  342378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 21:02:48.045567  342378 buildroot.go:174] setting up certificates
	I0108 21:02:48.045593  342378 provision.go:83] configureAuth start
	I0108 21:02:48.045608  342378 main.go:141] libmachine: (addons-417518) Calling .GetMachineName
	I0108 21:02:48.045912  342378 main.go:141] libmachine: (addons-417518) Calling .GetIP
	I0108 21:02:48.048595  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.049001  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.049034  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.049144  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:48.051418  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.051734  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.051786  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.051919  342378 provision.go:138] copyHostCerts
	I0108 21:02:48.052013  342378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 21:02:48.052166  342378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 21:02:48.052271  342378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 21:02:48.052355  342378 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.addons-417518 san=[192.168.39.218 192.168.39.218 localhost 127.0.0.1 minikube addons-417518]
	I0108 21:02:48.102418  342378 provision.go:172] copyRemoteCerts
	I0108 21:02:48.102496  342378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:02:48.102572  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:48.105166  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.105484  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.105520  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.105662  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:48.105865  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:48.106019  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:48.106170  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:02:48.200890  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:02:48.233905  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 21:02:48.259323  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:02:48.282110  342378 provision.go:86] duration metric: configureAuth took 236.502248ms
	I0108 21:02:48.282143  342378 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:02:48.282402  342378 config.go:182] Loaded profile config "addons-417518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:02:48.282526  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:48.284980  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.285382  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.285433  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.285659  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:48.285844  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:48.285986  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:48.286099  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:48.286331  342378 main.go:141] libmachine: Using SSH client type: native
	I0108 21:02:48.286708  342378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0108 21:02:48.286728  342378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:02:48.608970  342378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:02:48.609002  342378 main.go:141] libmachine: Checking connection to Docker...
	I0108 21:02:48.609036  342378 main.go:141] libmachine: (addons-417518) Calling .GetURL
	I0108 21:02:48.610613  342378 main.go:141] libmachine: (addons-417518) DBG | Using libvirt version 6000000
	I0108 21:02:48.612701  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.613040  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.613075  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.613212  342378 main.go:141] libmachine: Docker is up and running!
	I0108 21:02:48.613231  342378 main.go:141] libmachine: Reticulating splines...
	I0108 21:02:48.613239  342378 client.go:171] LocalClient.Create took 23.824437234s
	I0108 21:02:48.613263  342378 start.go:167] duration metric: libmachine.API.Create for "addons-417518" took 23.824514122s
	I0108 21:02:48.613273  342378 start.go:300] post-start starting for "addons-417518" (driver="kvm2")
	I0108 21:02:48.613291  342378 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:02:48.613309  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:02:48.613601  342378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:02:48.613624  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:48.615988  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.616307  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.616344  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.616470  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:48.616668  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:48.616955  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:48.617119  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:02:48.713809  342378 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:02:48.717920  342378 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:02:48.717947  342378 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 21:02:48.718013  342378 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 21:02:48.718051  342378 start.go:303] post-start completed in 104.771214ms
	I0108 21:02:48.718093  342378 main.go:141] libmachine: (addons-417518) Calling .GetConfigRaw
	I0108 21:02:48.718643  342378 main.go:141] libmachine: (addons-417518) Calling .GetIP
	I0108 21:02:48.721175  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.721573  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.721601  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.721828  342378 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/config.json ...
	I0108 21:02:48.721983  342378 start.go:128] duration metric: createHost completed in 23.951275293s
	I0108 21:02:48.722008  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:48.723998  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.724371  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.724392  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.724485  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:48.724692  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:48.724868  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:48.725009  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:48.725167  342378 main.go:141] libmachine: Using SSH client type: native
	I0108 21:02:48.725494  342378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0108 21:02:48.725508  342378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:02:48.856019  342378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704747768.833011351
	
	I0108 21:02:48.856050  342378 fix.go:206] guest clock: 1704747768.833011351
	I0108 21:02:48.856062  342378 fix.go:219] Guest: 2024-01-08 21:02:48.833011351 +0000 UTC Remote: 2024-01-08 21:02:48.721993707 +0000 UTC m=+24.070294543 (delta=111.017644ms)
	I0108 21:02:48.856111  342378 fix.go:190] guest clock delta is within tolerance: 111.017644ms
	I0108 21:02:48.856118  342378 start.go:83] releasing machines lock for "addons-417518", held for 24.085490793s
	I0108 21:02:48.856149  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:02:48.856449  342378 main.go:141] libmachine: (addons-417518) Calling .GetIP
	I0108 21:02:48.858910  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.859206  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.859242  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.859395  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:02:48.859878  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:02:48.860083  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:02:48.860213  342378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:02:48.860254  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:48.860347  342378 ssh_runner.go:195] Run: cat /version.json
	I0108 21:02:48.860378  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:02:48.863235  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.863743  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.863771  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.863809  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.863966  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:48.864181  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:48.864292  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:48.864320  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:48.864326  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:48.864512  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:02:48.864503  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:02:48.864660  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:02:48.864805  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:02:48.864918  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:02:48.980763  342378 ssh_runner.go:195] Run: systemctl --version
	I0108 21:02:48.985949  342378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:02:49.141473  342378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:02:49.148371  342378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:02:49.148473  342378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:02:49.162818  342378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:02:49.162840  342378 start.go:475] detecting cgroup driver to use...
	I0108 21:02:49.162939  342378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:02:49.179033  342378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:02:49.190667  342378 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:02:49.190728  342378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:02:49.202323  342378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:02:49.214046  342378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:02:49.316378  342378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:02:49.437481  342378 docker.go:219] disabling docker service ...
	I0108 21:02:49.437584  342378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:02:49.451215  342378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:02:49.462981  342378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:02:49.569574  342378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:02:49.674134  342378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:02:49.687448  342378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:02:49.705101  342378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:02:49.705189  342378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:02:49.715246  342378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:02:49.715318  342378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:02:49.725235  342378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:02:49.734929  342378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:02:49.744687  342378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:02:49.754700  342378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:02:49.763501  342378 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:02:49.763553  342378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 21:02:49.776041  342378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:02:49.785108  342378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:02:49.889878  342378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:02:50.045388  342378 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:02:50.045484  342378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:02:50.050297  342378 start.go:543] Will wait 60s for crictl version
	I0108 21:02:50.050379  342378 ssh_runner.go:195] Run: which crictl
	I0108 21:02:50.053973  342378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:02:50.094333  342378 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:02:50.094430  342378 ssh_runner.go:195] Run: crio --version
	I0108 21:02:50.145765  342378 ssh_runner.go:195] Run: crio --version
	I0108 21:02:50.201646  342378 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:02:50.203199  342378 main.go:141] libmachine: (addons-417518) Calling .GetIP
	I0108 21:02:50.205805  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:50.206148  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:02:50.206182  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:02:50.206391  342378 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:02:50.210463  342378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:02:50.222971  342378 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:02:50.223038  342378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:02:50.256131  342378 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 21:02:50.256214  342378 ssh_runner.go:195] Run: which lz4
	I0108 21:02:50.259895  342378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 21:02:50.263822  342378 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:02:50.263863  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 21:02:52.014667  342378 crio.go:444] Took 1.754798 seconds to copy over tarball
	I0108 21:02:52.014755  342378 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:02:55.066699  342378 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.051917193s)
	I0108 21:02:55.066730  342378 crio.go:451] Took 3.052031 seconds to extract the tarball
	I0108 21:02:55.066740  342378 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 21:02:55.107305  342378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:02:55.171769  342378 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:02:55.171804  342378 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:02:55.171883  342378 ssh_runner.go:195] Run: crio config
	I0108 21:02:55.235310  342378 cni.go:84] Creating CNI manager for ""
	I0108 21:02:55.235335  342378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:02:55.235371  342378 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:02:55.235396  342378 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.218 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-417518 NodeName:addons-417518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:02:55.235542  342378 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-417518"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:02:55.235632  342378 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-417518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-417518 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:02:55.235696  342378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:02:55.244505  342378 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:02:55.244582  342378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:02:55.253564  342378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0108 21:02:55.270280  342378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:02:55.286428  342378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0108 21:02:55.301930  342378 ssh_runner.go:195] Run: grep 192.168.39.218	control-plane.minikube.internal$ /etc/hosts
	I0108 21:02:55.305579  342378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:02:55.316973  342378 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518 for IP: 192.168.39.218
	I0108 21:02:55.317015  342378 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:55.317158  342378 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 21:02:55.454987  342378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt ...
	I0108 21:02:55.455022  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt: {Name:mk3042cae0353431ef9ccab1aeab2e63b9ace33f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:55.455216  342378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key ...
	I0108 21:02:55.455232  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key: {Name:mk23039ce105f8ad5dbcaa2b7bf6d54727727e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:55.455333  342378 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 21:02:55.503079  342378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt ...
	I0108 21:02:55.503116  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt: {Name:mk76954aa0a1505da26b6121e1d2de4707f63698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:55.503308  342378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key ...
	I0108 21:02:55.503325  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key: {Name:mk55de870c49e235db701a74bf62b0012453a7ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:55.503499  342378 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.key
	I0108 21:02:55.503518  342378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt with IP's: []
	I0108 21:02:55.974077  342378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt ...
	I0108 21:02:55.974131  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: {Name:mkaa74384038821ea7f27c0f788035d455e9fe4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:55.974313  342378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.key ...
	I0108 21:02:55.974324  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.key: {Name:mkd9b19aab5f1328ad64ba8be39c98ff1ec9109a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:55.974397  342378 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.key.ac8915a9
	I0108 21:02:55.974415  342378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.crt.ac8915a9 with IP's: [192.168.39.218 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:02:56.113919  342378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.crt.ac8915a9 ...
	I0108 21:02:56.113954  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.crt.ac8915a9: {Name:mk6e3f8d77d520ab95c13a0a8734a251f1619d37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:56.114124  342378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.key.ac8915a9 ...
	I0108 21:02:56.114139  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.key.ac8915a9: {Name:mk65b17c6923d42b23f2909e886d7c98c1ef07af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:56.114207  342378 certs.go:337] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.crt.ac8915a9 -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.crt
	I0108 21:02:56.114275  342378 certs.go:341] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.key.ac8915a9 -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.key
	I0108 21:02:56.114320  342378 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/proxy-client.key
	I0108 21:02:56.114337  342378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/proxy-client.crt with IP's: []
	I0108 21:02:56.303606  342378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/proxy-client.crt ...
	I0108 21:02:56.303648  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/proxy-client.crt: {Name:mk7d5b66a3375d36d58ff64aa37a428cebf42d36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:56.303807  342378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/proxy-client.key ...
	I0108 21:02:56.303826  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/proxy-client.key: {Name:mk21a167e5e005a51bbde2b270ccd1e38dcab5eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:02:56.304046  342378 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:02:56.304087  342378 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:02:56.304112  342378 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:02:56.304137  342378 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 21:02:56.304726  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:02:56.327851  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:02:56.350050  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:02:56.374508  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:02:56.397182  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:02:56.418323  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:02:56.439785  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:02:56.461504  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:02:56.483486  342378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:02:56.506315  342378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:02:56.522800  342378 ssh_runner.go:195] Run: openssl version
	I0108 21:02:56.528635  342378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:02:56.539851  342378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:02:56.544682  342378 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:02:56.544751  342378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:02:56.550694  342378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:02:56.561088  342378 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:02:56.565200  342378 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:02:56.565262  342378 kubeadm.go:404] StartCluster: {Name:addons-417518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-417518 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:02:56.565375  342378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:02:56.565431  342378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:02:56.603869  342378 cri.go:89] found id: ""
	I0108 21:02:56.603968  342378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:02:56.613932  342378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:02:56.623479  342378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:02:56.633369  342378 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:02:56.633431  342378 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 21:02:56.691287  342378 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 21:02:56.691404  342378 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:02:56.834686  342378 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:02:56.834833  342378 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:02:56.834928  342378 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:02:57.079335  342378 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:02:57.242884  342378 out.go:204]   - Generating certificates and keys ...
	I0108 21:02:57.243026  342378 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:02:57.243095  342378 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:02:57.307552  342378 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:02:57.606382  342378 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:02:57.707720  342378 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:02:57.829021  342378 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 21:02:58.195809  342378 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 21:02:58.195973  342378 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-417518 localhost] and IPs [192.168.39.218 127.0.0.1 ::1]
	I0108 21:02:58.274238  342378 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 21:02:58.274419  342378 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-417518 localhost] and IPs [192.168.39.218 127.0.0.1 ::1]
	I0108 21:02:58.371771  342378 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:02:58.622901  342378 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:02:58.715740  342378 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 21:02:58.715979  342378 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:02:59.069333  342378 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:02:59.158109  342378 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:02:59.419592  342378 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:02:59.616400  342378 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:02:59.616996  342378 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:02:59.622738  342378 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:02:59.709506  342378 out.go:204]   - Booting up control plane ...
	I0108 21:02:59.709704  342378 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:02:59.755271  342378 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:02:59.755408  342378 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:02:59.755562  342378 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:02:59.755689  342378 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:02:59.755739  342378 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:02:59.772255  342378 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:03:07.273781  342378 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503659 seconds
	I0108 21:03:07.273948  342378 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:03:07.295211  342378 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:03:07.829748  342378 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:03:07.829958  342378 kubeadm.go:322] [mark-control-plane] Marking the node addons-417518 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:03:08.346243  342378 kubeadm.go:322] [bootstrap-token] Using token: tosrij.4js3gzzh3bshey0r
	I0108 21:03:08.347799  342378 out.go:204]   - Configuring RBAC rules ...
	I0108 21:03:08.347908  342378 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:03:08.353978  342378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:03:08.365504  342378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:03:08.369339  342378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:03:08.373664  342378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:03:08.377105  342378 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:03:08.392075  342378 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:03:08.604097  342378 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:03:08.759475  342378 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:03:08.760330  342378 kubeadm.go:322] 
	I0108 21:03:08.760419  342378 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:03:08.760430  342378 kubeadm.go:322] 
	I0108 21:03:08.760535  342378 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:03:08.760546  342378 kubeadm.go:322] 
	I0108 21:03:08.760590  342378 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:03:08.760686  342378 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:03:08.760767  342378 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:03:08.760780  342378 kubeadm.go:322] 
	I0108 21:03:08.760850  342378 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 21:03:08.760862  342378 kubeadm.go:322] 
	I0108 21:03:08.760924  342378 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:03:08.760932  342378 kubeadm.go:322] 
	I0108 21:03:08.761001  342378 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:03:08.761096  342378 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:03:08.761184  342378 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:03:08.761204  342378 kubeadm.go:322] 
	I0108 21:03:08.761332  342378 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:03:08.761441  342378 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:03:08.761456  342378 kubeadm.go:322] 
	I0108 21:03:08.761572  342378 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tosrij.4js3gzzh3bshey0r \
	I0108 21:03:08.761716  342378 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 21:03:08.761753  342378 kubeadm.go:322] 	--control-plane 
	I0108 21:03:08.761772  342378 kubeadm.go:322] 
	I0108 21:03:08.761895  342378 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:03:08.761908  342378 kubeadm.go:322] 
	I0108 21:03:08.762024  342378 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tosrij.4js3gzzh3bshey0r \
	I0108 21:03:08.762141  342378 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 21:03:08.762424  342378 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
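	Note on the warning above: kubeadm reports that the kubelet systemd unit is not enabled for boot on the node. Following the hint in the warning text itself, this would typically be addressed on the guest (for example via "minikube ssh"); the commands below are illustrative, not part of the captured run:

	    sudo systemctl enable kubelet.service
	    systemctl is-enabled kubelet.service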
	I0108 21:03:08.762444  342378 cni.go:84] Creating CNI manager for ""
	I0108 21:03:08.762459  342378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:03:08.765568  342378 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:03:08.766935  342378 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:03:08.802596  342378 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
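	The 457-byte file written above is minikube's bridge CNI configuration; its exact contents are not shown in this log. As a rough sketch only (the plugin layout and the 10.244.0.0/16 subnet below are assumptions for illustration, not the literal file), a bridge conflist of this kind could be written as:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF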
	I0108 21:03:08.844334  342378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:03:08.844398  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:08.844406  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=addons-417518 minikube.k8s.io/updated_at=2024_01_08T21_03_08_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:08.927588  342378 ops.go:34] apiserver oom_adj: -16
	I0108 21:03:09.090873  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:09.590897  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:10.091368  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:10.591596  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:11.091873  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:11.591074  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:12.091012  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:12.591268  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:13.091089  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:13.591630  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:14.091021  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:14.590952  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:15.091149  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:15.591052  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:16.091561  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:16.590990  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:17.091642  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:17.591711  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:18.091130  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:18.591102  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:19.091208  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:19.591906  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:20.091345  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:20.591751  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:21.091290  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:21.591535  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:22.091070  342378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:03:22.196146  342378 kubeadm.go:1088] duration metric: took 13.3518054s to wait for elevateKubeSystemPrivileges.
	I0108 21:03:22.196192  342378 kubeadm.go:406] StartCluster complete in 25.630935165s
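	The burst of identical "get sa default" calls between 21:03:09 and 21:03:22 is a poll: minikube appears to retry roughly every half second until the default ServiceAccount exists before counting elevateKubeSystemPrivileges as finished. A rough shell equivalent of that wait (a sketch, not minikube's actual Go code) would be:

	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # matches the ~500ms spacing of the log entries above
	    done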
	I0108 21:03:22.196218  342378 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:03:22.196362  342378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:03:22.196798  342378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:03:22.197090  342378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:03:22.197203  342378 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
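	The toEnable map above lists every addon this test profile turns on (ingress, registry, metrics-server, csi-hostpath-driver, and so on). Outside the test harness, the same addons can be toggled per profile with the minikube CLI; the commands below are illustrative and were not part of the captured run:

	    minikube addons list -p addons-417518
	    minikube addons enable ingress -p addons-417518
	    minikube addons disable ingress -p addons-417518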
	I0108 21:03:22.197334  342378 config.go:182] Loaded profile config "addons-417518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:03:22.197355  342378 addons.go:69] Setting yakd=true in profile "addons-417518"
	I0108 21:03:22.197368  342378 addons.go:69] Setting cloud-spanner=true in profile "addons-417518"
	I0108 21:03:22.197383  342378 addons.go:237] Setting addon yakd=true in "addons-417518"
	I0108 21:03:22.197394  342378 addons.go:237] Setting addon cloud-spanner=true in "addons-417518"
	I0108 21:03:22.197394  342378 addons.go:69] Setting metrics-server=true in profile "addons-417518"
	I0108 21:03:22.197411  342378 addons.go:237] Setting addon metrics-server=true in "addons-417518"
	I0108 21:03:22.197410  342378 addons.go:69] Setting inspektor-gadget=true in profile "addons-417518"
	I0108 21:03:22.197417  342378 addons.go:69] Setting storage-provisioner=true in profile "addons-417518"
	I0108 21:03:22.197432  342378 addons.go:237] Setting addon inspektor-gadget=true in "addons-417518"
	I0108 21:03:22.197444  342378 addons.go:237] Setting addon storage-provisioner=true in "addons-417518"
	I0108 21:03:22.197442  342378 addons.go:69] Setting ingress-dns=true in profile "addons-417518"
	I0108 21:03:22.197472  342378 addons.go:237] Setting addon ingress-dns=true in "addons-417518"
	I0108 21:03:22.197476  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.197480  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.197480  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.197490  342378 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-417518"
	I0108 21:03:22.197500  342378 addons.go:69] Setting volumesnapshots=true in profile "addons-417518"
	I0108 21:03:22.197511  342378 addons.go:237] Setting addon volumesnapshots=true in "addons-417518"
	I0108 21:03:22.197522  342378 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-417518"
	I0108 21:03:22.197527  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.197539  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.197553  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.197626  342378 addons.go:69] Setting registry=true in profile "addons-417518"
	I0108 21:03:22.197639  342378 addons.go:237] Setting addon registry=true in "addons-417518"
	I0108 21:03:22.197667  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.197704  342378 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-417518"
	I0108 21:03:22.197719  342378 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-417518"
	I0108 21:03:22.197753  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.198111  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198121  342378 addons.go:69] Setting default-storageclass=true in profile "addons-417518"
	I0108 21:03:22.198127  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198135  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198145  342378 addons.go:69] Setting gcp-auth=true in profile "addons-417518"
	I0108 21:03:22.198150  342378 addons.go:69] Setting helm-tiller=true in profile "addons-417518"
	I0108 21:03:22.198164  342378 mustload.go:65] Loading cluster: addons-417518
	I0108 21:03:22.198168  342378 addons.go:237] Setting addon helm-tiller=true in "addons-417518"
	I0108 21:03:22.198176  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.198176  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198185  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.198201  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.198208  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.197481  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.198140  342378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-417518"
	I0108 21:03:22.197491  342378 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-417518"
	I0108 21:03:22.198278  342378 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-417518"
	I0108 21:03:22.198111  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198302  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.197357  342378 addons.go:69] Setting ingress=true in profile "addons-417518"
	I0108 21:03:22.198334  342378 config.go:182] Loaded profile config "addons-417518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:03:22.198343  342378 addons.go:237] Setting addon ingress=true in "addons-417518"
	I0108 21:03:22.197483  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.198111  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198392  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.198111  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198451  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.198142  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.198638  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198679  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.198690  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.198639  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198843  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.198659  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.199018  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.199024  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.199055  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.198682  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.199114  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.199024  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.199181  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.198666  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.198662  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.199410  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.199418  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.217871  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44019
	I0108 21:03:22.218057  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0108 21:03:22.218481  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.218634  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.218997  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.219067  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.219244  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.219260  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.219419  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.219580  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.219644  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.220242  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45931
	I0108 21:03:22.220450  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I0108 21:03:22.221004  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.221181  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.221544  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.221585  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.221727  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.221765  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.221861  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33333
	I0108 21:03:22.222284  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.222409  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.222484  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.222909  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.222979  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.223479  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.224039  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.224107  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.224932  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.224996  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.235517  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45393
	I0108 21:03:22.235663  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.235687  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.236189  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.236227  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.236945  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42781
	I0108 21:03:22.237105  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.237167  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0108 21:03:22.237648  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.237689  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.244145  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.244262  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.244826  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.245024  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.245039  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.245160  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.245173  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.245382  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.245402  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.245851  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.245925  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.245967  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.246397  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.246432  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.254427  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I0108 21:03:22.255120  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.255178  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.255450  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.255978  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.255992  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.256014  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.256044  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.256518  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.256727  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.257749  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0108 21:03:22.259092  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45197
	I0108 21:03:22.259213  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.259807  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.259834  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.260194  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.260430  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.261613  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0108 21:03:22.262148  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.262687  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.262735  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.262764  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.264610  342378 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 21:03:22.263085  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.263546  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.267959  342378 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 21:03:22.266882  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.267087  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.269254  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.269332  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.269380  342378 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 21:03:22.269396  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 21:03:22.269415  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.269697  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36973
	I0108 21:03:22.270080  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.270573  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.270598  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.271116  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.271306  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.273510  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.273656  342378 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-417518"
	I0108 21:03:22.273702  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.273917  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.273942  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.274123  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.274164  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.274241  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.274319  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.274404  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.274504  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.274913  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.274944  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.275148  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0108 21:03:22.275164  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.275719  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.276195  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.276215  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.276639  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.276860  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.276914  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45503
	I0108 21:03:22.277343  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.277805  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.277823  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.278166  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.278730  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.278766  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.279484  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I0108 21:03:22.279838  342378 addons.go:237] Setting addon default-storageclass=true in "addons-417518"
	I0108 21:03:22.279874  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:22.280176  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0108 21:03:22.280247  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.280276  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.280581  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.281000  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.281018  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.281362  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.281888  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.281920  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.284816  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0108 21:03:22.285236  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.285896  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.285919  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.286028  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.286417  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.286616  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.286633  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.286696  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0108 21:03:22.287152  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.287181  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.287747  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.287962  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.289576  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.290096  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.292360  342378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:03:22.291001  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.296463  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.296650  342378 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:03:22.296669  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:03:22.296687  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.297114  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.297294  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.298583  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I0108 21:03:22.299262  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.299865  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.299885  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.300424  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.300481  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.302217  342378 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 21:03:22.300850  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I0108 21:03:22.301061  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.301227  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.301869  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.302993  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45335
	I0108 21:03:22.303698  342378 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 21:03:22.303713  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 21:03:22.303732  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.303784  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.303808  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.304373  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.304459  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.304527  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.305184  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.305204  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.305338  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.305349  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.305400  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.305874  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.305937  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.305985  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.306499  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.306543  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.306706  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.307738  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.307763  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.308206  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.308403  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.308681  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.308889  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.309224  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.311411  342378 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 21:03:22.310013  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.310041  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.313833  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0108 21:03:22.314601  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0108 21:03:22.315108  342378 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 21:03:22.315126  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 21:03:22.315147  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.317209  342378 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 21:03:22.316074  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.316732  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.317494  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0108 21:03:22.317993  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44125
	I0108 21:03:22.318549  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I0108 21:03:22.318615  342378 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 21:03:22.319134  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.319862  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.319906  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0108 21:03:22.319920  342378 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 21:03:22.321590  342378 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:03:22.321604  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:03:22.321616  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.320297  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.319481  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0108 21:03:22.320345  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 21:03:22.321777  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.320363  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.319158  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.321849  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.321873  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.320620  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.320664  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.320799  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.320810  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.321037  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.321956  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.322572  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.322590  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.322589  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.322608  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.322667  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.322739  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.322757  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.322771  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.322840  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.322857  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.322866  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.322871  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.323445  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.323443  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.323510  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.323527  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.323561  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.323575  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.323622  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.323819  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.323861  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.323877  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.323898  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.324473  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.324504  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.324692  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.325323  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.325600  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.326565  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.326993  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.327217  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.327294  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.327309  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.328799  342378 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 21:03:22.327529  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.328655  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0108 21:03:22.328781  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.328946  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.329346  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.329462  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.329518  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.329852  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.330103  342378 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 21:03:22.330468  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.331213  342378 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 21:03:22.331349  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 21:03:22.332598  342378 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 21:03:22.332608  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.332613  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 21:03:22.332630  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.331398  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.332667  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.331896  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.331909  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.333954  342378 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 21:03:22.332181  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.332846  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.332879  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.335256  342378 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 21:03:22.335272  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 21:03:22.335290  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.335300  342378 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 21:03:22.336591  342378 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 21:03:22.336609  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 21:03:22.336625  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.335294  342378 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 21:03:22.337820  342378 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 21:03:22.336707  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.335475  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.336058  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.335401  342378 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 21:03:22.336942  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.338836  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.340268  342378 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 21:03:22.339194  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.339494  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.340309  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.341484  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.339751  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.340038  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.340041  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.340190  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.339529  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.340480  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.340812  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:22.340873  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.341571  342378 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 21:03:22.342741  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 21:03:22.342759  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.342812  342378 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 21:03:22.342947  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.344073  342378 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 21:03:22.342978  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.343021  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.343051  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:22.343124  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.343205  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.343245  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.343542  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.344099  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.344635  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45839
	I0108 21:03:22.345441  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.345909  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.348930  342378 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 21:03:22.347620  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.347776  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.347899  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.347929  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.347924  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.347944  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.347939  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.348210  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.351796  342378 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 21:03:22.350275  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.350457  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.350513  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.350575  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.350758  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.350826  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.352859  342378 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 21:03:22.354079  342378 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 21:03:22.352872  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.353695  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.356434  342378 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 21:03:22.357604  342378 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 21:03:22.357617  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 21:03:22.357635  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.355653  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.357868  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.360013  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.360238  342378 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:03:22.360256  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:03:22.360272  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.361881  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.362398  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.362417  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.362562  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.362695  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.362785  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.362858  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.363642  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.364008  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.364021  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.364233  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.364349  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.364477  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.364560  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	W0108 21:03:22.365351  342378 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0108 21:03:22.365371  342378 retry.go:31] will retry after 179.066188ms: ssh: handshake failed: EOF
	I0108 21:03:22.368285  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0108 21:03:22.368694  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:22.369141  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:22.369167  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:22.369466  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:22.369690  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:22.371136  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:22.372800  342378 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 21:03:22.373976  342378 out.go:177]   - Using image docker.io/busybox:stable
	I0108 21:03:22.375179  342378 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 21:03:22.375195  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 21:03:22.375209  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:22.378321  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.378732  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:22.378757  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:22.378956  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:22.379120  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:22.379284  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:22.379408  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:22.533817  342378 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 21:03:22.533839  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 21:03:22.571377  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 21:03:22.574559  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:03:22.590498  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 21:03:22.637780  342378 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 21:03:22.637801  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 21:03:22.645454  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 21:03:22.671924  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 21:03:22.689784  342378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
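The bash pipeline above rewrites the CoreDNS ConfigMap in place. Reconstructed from the sed expression itself, the block it inserts just before the `forward . /etc/resolv.conf` line is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

plus a `log` directive before `errors`, so that pods in the cluster can resolve host.minikube.internal to the host-side address 192.168.39.1.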
	I0108 21:03:22.747606  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 21:03:22.755084  342378 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 21:03:22.755109  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 21:03:22.794959  342378 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:03:22.794985  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 21:03:22.800818  342378 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 21:03:22.800845  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 21:03:22.822884  342378 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 21:03:22.822908  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 21:03:22.844859  342378 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 21:03:22.844883  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 21:03:22.849756  342378 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-417518" context rescaled to 1 replicas
	I0108 21:03:22.849796  342378 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:03:22.851630  342378 out.go:177] * Verifying Kubernetes components...
	I0108 21:03:22.853043  342378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:03:22.869567  342378 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 21:03:22.869611  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 21:03:22.882002  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 21:03:22.999447  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:03:23.088555  342378 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 21:03:23.088581  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 21:03:23.136293  342378 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 21:03:23.136328  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 21:03:23.142430  342378 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:03:23.142452  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:03:23.158305  342378 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 21:03:23.158332  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 21:03:23.172924  342378 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 21:03:23.172949  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 21:03:23.180187  342378 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 21:03:23.180214  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 21:03:23.261561  342378 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 21:03:23.261588  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 21:03:23.337276  342378 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:03:23.337301  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:03:23.352597  342378 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 21:03:23.352624  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 21:03:23.357842  342378 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 21:03:23.357870  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 21:03:23.358956  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 21:03:23.371475  342378 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 21:03:23.371503  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 21:03:23.419099  342378 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 21:03:23.419123  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 21:03:23.562812  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:03:23.578904  342378 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 21:03:23.578925  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 21:03:23.598617  342378 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 21:03:23.598646  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 21:03:23.608858  342378 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 21:03:23.608885  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 21:03:23.612104  342378 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 21:03:23.612131  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 21:03:23.641304  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 21:03:23.752676  342378 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 21:03:23.752706  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 21:03:23.760919  342378 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 21:03:23.760940  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 21:03:23.761455  342378 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 21:03:23.761471  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 21:03:23.821823  342378 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 21:03:23.821849  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 21:03:23.834719  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 21:03:23.848855  342378 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 21:03:23.848880  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 21:03:23.883338  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 21:03:23.911438  342378 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 21:03:23.911467  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 21:03:23.963026  342378 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 21:03:23.963081  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 21:03:24.027303  342378 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 21:03:24.027328  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 21:03:24.087924  342378 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 21:03:24.087950  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 21:03:24.129010  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 21:03:26.637235  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.065811662s)
	I0108 21:03:26.637315  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:26.637330  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:26.637817  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:26.637827  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:26.637851  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:26.637868  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:26.637883  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:26.638155  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:26.638224  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:26.638184  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:29.812967  342378 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 21:03:29.813017  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:29.816219  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:29.816719  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:29.816752  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:29.816917  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:29.817194  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:29.817394  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:29.817588  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:30.013411  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.422860625s)
	I0108 21:03:30.013477  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:30.013487  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:30.013506  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.438908458s)
	I0108 21:03:30.013549  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:30.013591  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:30.013799  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:30.013827  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:30.013844  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:30.013850  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:30.013859  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:30.013865  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:30.013872  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:30.013876  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:30.013885  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:30.014202  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:30.014254  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:30.014267  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:30.014256  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:30.014204  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:30.014326  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:30.018757  342378 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 21:03:30.073140  342378 addons.go:237] Setting addon gcp-auth=true in "addons-417518"
	I0108 21:03:30.073205  342378 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:03:30.073695  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:30.073768  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:30.088776  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0108 21:03:30.089388  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:30.089914  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:30.089934  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:30.090395  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:30.091087  342378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:03:30.091129  342378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:03:30.106406  342378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0108 21:03:30.106876  342378 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:03:30.107384  342378 main.go:141] libmachine: Using API Version  1
	I0108 21:03:30.107410  342378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:03:30.107771  342378 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:03:30.108001  342378 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:03:30.109474  342378 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:03:30.109724  342378 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 21:03:30.109749  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:03:30.113046  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:30.113496  342378 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:03:30.113535  342378 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:03:30.113817  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:03:30.114008  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:03:30.114181  342378 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:03:30.114312  342378 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:03:31.992189  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.346690874s)
	I0108 21:03:31.992254  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.992268  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.992258  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.320295118s)
	I0108 21:03:31.992310  342378 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.302473972s)
	I0108 21:03:31.992340  342378 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 21:03:31.992362  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.244722943s)
	I0108 21:03:31.992318  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.992382  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.992400  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.992386  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.992505  342378 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (9.139406858s)
	I0108 21:03:31.992512  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.110474298s)
	I0108 21:03:31.992534  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.992549  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.992922  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.992937  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.992953  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.992963  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.992977  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.992996  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.992977  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.993122  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.993147  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.993161  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.993076  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.993186  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.993205  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.993217  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.993225  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.993228  342378 addons.go:473] Verifying addon registry=true in "addons-417518"
	I0108 21:03:31.993235  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.993089  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.994740  342378 out.go:177] * Verifying registry addon...
	I0108 21:03:31.995944  342378 node_ready.go:35] waiting up to 6m0s for node "addons-417518" to be "Ready" ...
	I0108 21:03:31.993436  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.994769  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.160020738s)
	W0108 21:03:31.996121  342378 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 21:03:31.996145  342378 retry.go:31] will retry after 219.850451ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
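The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass object is submitted in the same kubectl apply as the CRDs that define it, so the REST mapping for snapshot.storage.k8s.io/v1 does not exist yet ("ensure CRDs are installed first"). The log handles this by sleeping and re-applying, and the forced re-apply at 21:03:32 later in the log succeeds once the CRDs are established. A minimal Go sketch of that retry-after-delay pattern (an illustration only, not minikube's actual retry.go, which uses randomized delays):

        import "time"

        // retryAfter runs fn up to attempts times, sleeping a growing delay between
        // tries, in the spirit of the "will retry after ..." lines above.
        func retryAfter(attempts int, initial time.Duration, fn func() error) error {
                delay := initial
                var err error
                for i := 0; i < attempts; i++ {
                        if err = fn(); err == nil {
                                return nil
                        }
                        time.Sleep(delay)
                        delay *= 2 // simple doubling backoff for the sketch
                }
                return err
        }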
	I0108 21:03:31.993475  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.996175  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.993525  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.993103  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.996256  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.996268  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.996277  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.994458  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.994979394s)
	I0108 21:03:31.996297  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.996309  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.996308  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.996321  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.996335  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.994526  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.635543388s)
	I0108 21:03:31.996386  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.996395  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.994605  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.431766601s)
	I0108 21:03:31.996440  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.996448  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.994659  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.353307633s)
	I0108 21:03:31.996503  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.996525  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.996525  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.996556  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.996569  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.994828  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.111451692s)
	I0108 21:03:31.996578  342378 addons.go:473] Verifying addon ingress=true in "addons-417518"
	I0108 21:03:31.996596  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.996609  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.998099  342378 out.go:177] * Verifying ingress addon...
	I0108 21:03:31.996657  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.996679  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.996928  342378 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 21:03:31.996961  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.996988  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.997020  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.997039  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.997200  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.997225  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.997470  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.997502  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:31.999444  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.999465  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.999476  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.999531  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.999541  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.999550  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.999586  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.999595  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.999608  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.999613  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:31.999630  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:31.999639  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:31.999782  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:31.999793  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:32.000522  342378 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 21:03:32.000650  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:32.000662  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:32.000670  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:32.000774  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:32.000782  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:32.000832  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:32.000852  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:32.000859  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:32.003423  342378 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-417518 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 21:03:32.001184  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:32.001208  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:32.003057  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:32.003083  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:32.004826  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:32.004836  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:32.004858  342378 addons.go:473] Verifying addon metrics-server=true in "addons-417518"
	I0108 21:03:32.026244  342378 node_ready.go:49] node "addons-417518" has status "Ready":"True"
	I0108 21:03:32.026272  342378 node_ready.go:38] duration metric: took 30.304956ms waiting for node "addons-417518" to be "Ready" ...
	I0108 21:03:32.026285  342378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:03:32.039910  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:32.039929  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:32.040204  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:32.040226  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	W0108 21:03:32.040327  342378 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
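The 'storage-provisioner-rancher' warning above is a plain optimistic-concurrency conflict: the StorageClass was modified by another writer between the addon's read and its update while it tried to mark local-path as default. The usual remedy is to retry the whole read-modify-write on conflict. A hedged client-go sketch of that pattern (illustrative only; the helper name is invented, the annotation key is the standard default-class annotation):

        import (
                "context"

                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
                "k8s.io/client-go/kubernetes"
                "k8s.io/client-go/util/retry"
        )

        // markDefaultStorageClass re-reads the StorageClass on every attempt, so a
        // concurrent write costs one more round trip instead of a hard failure.
        func markDefaultStorageClass(cs kubernetes.Interface, name string) error {
                return retry.RetryOnConflict(retry.DefaultRetry, func() error {
                        sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
                        if err != nil {
                                return err
                        }
                        if sc.Annotations == nil {
                                sc.Annotations = map[string]string{}
                        }
                        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
                        _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
                        return err
                })
        }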
	I0108 21:03:32.048291  342378 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 21:03:32.048314  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:32.063893  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:32.063918  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:32.064250  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:32.064273  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:32.064280  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:32.071287  342378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace to be "Ready" ...
	I0108 21:03:32.075046  342378 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 21:03:32.075064  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
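The kapi.go "waiting for pod ... current state: Pending" lines that follow are a poll loop over the labelled pods (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, and later csi-hostpath-driver and gcp-auth) until every match reports Ready. A minimal client-go sketch of that kind of wait (an assumed shape only, not minikube's actual kapi implementation):

        import (
                "context"
                "time"

                corev1 "k8s.io/api/core/v1"
                metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
                "k8s.io/apimachinery/pkg/util/wait"
                "k8s.io/client-go/kubernetes"
        )

        // waitForPodsReady polls pods matching selector in ns until all of them carry
        // the PodReady condition, or the timeout expires.
        func waitForPodsReady(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
                return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
                        pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
                        if err != nil || len(pods.Items) == 0 {
                                return false, nil // transient errors and empty lists: keep polling
                        }
                        for _, p := range pods.Items {
                                ready := false
                                for _, c := range p.Status.Conditions {
                                        if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                                                ready = true
                                        }
                                }
                                if !ready {
                                        return false, nil
                                }
                        }
                        return true, nil
                })
        }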
	I0108 21:03:32.216612  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 21:03:32.564877  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:32.568341  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:32.757774  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.628704628s)
	I0108 21:03:32.757828  342378 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.648084758s)
	I0108 21:03:32.757840  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:32.757857  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:32.759671  342378 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 21:03:32.758194  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:32.758250  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:32.759741  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:32.759759  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:32.759770  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:32.761256  342378 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 21:03:32.760061  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:32.760093  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:32.762629  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:32.762649  342378 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-417518"
	I0108 21:03:32.762679  342378 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 21:03:32.762701  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 21:03:32.764285  342378 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 21:03:32.766217  342378 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 21:03:32.842592  342378 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 21:03:32.842624  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 21:03:32.871432  342378 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 21:03:32.871465  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:32.956970  342378 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 21:03:32.957003  342378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 21:03:33.006107  342378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 21:03:33.033514  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:33.055565  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:33.292164  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:33.524510  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:33.524765  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:33.782253  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:34.011543  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:34.015674  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:34.088293  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:34.208737  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.99206906s)
	I0108 21:03:34.208812  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:34.208836  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:34.209177  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:34.209197  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:34.209208  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:34.209217  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:34.209555  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:34.209565  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:34.209574  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:34.286117  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:34.507906  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:34.513018  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:34.879305  342378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.873135123s)
	I0108 21:03:34.879388  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:34.879411  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:34.879793  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:34.879813  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:34.879825  342378 main.go:141] libmachine: Making call to close driver server
	I0108 21:03:34.879835  342378 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:03:34.879914  342378 main.go:141] libmachine: (addons-417518) DBG | Closing plugin on server side
	I0108 21:03:34.880104  342378 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:03:34.880152  342378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:03:34.881487  342378 addons.go:473] Verifying addon gcp-auth=true in "addons-417518"
	I0108 21:03:34.883343  342378 out.go:177] * Verifying gcp-auth addon...
	I0108 21:03:34.885638  342378 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 21:03:34.897203  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:34.944946  342378 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 21:03:34.944976  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:35.004163  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:35.011070  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:35.297653  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:35.401015  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:35.514319  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:35.517592  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:35.780284  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:35.901074  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:36.006368  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:36.007910  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:36.097377  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:36.273268  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:36.390336  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:36.507310  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:36.507506  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:36.779040  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:36.895398  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:37.006620  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:37.006875  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:37.273807  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:37.391326  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:37.506189  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:37.507761  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:37.773789  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:37.890386  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:38.009720  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:38.010194  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:38.277251  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:38.399888  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:38.511213  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:38.515037  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:38.578700  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:38.777395  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:38.890425  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:39.008638  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:39.008936  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:39.277419  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:39.389387  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:39.525063  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:39.525376  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:39.777888  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:39.910928  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:40.014845  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:40.039153  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:40.276458  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:40.391573  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:40.508396  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:40.513571  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:40.579282  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:40.774070  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:40.890227  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:41.010095  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:41.013237  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:41.277289  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:41.397102  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:41.524172  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:41.526636  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:41.783234  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:41.891320  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:42.277236  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:42.278898  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:42.289496  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:42.395646  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:42.509064  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:42.513294  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:42.579405  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:42.771939  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:42.901085  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:43.011867  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:43.013108  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:43.276688  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:43.391481  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:43.512500  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:43.513369  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:43.773056  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:43.894300  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:44.006455  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:44.008382  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:44.284294  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:44.396431  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:44.527771  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:44.554769  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:44.802075  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:44.813694  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:44.889448  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:45.005392  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:45.007765  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:45.276146  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:45.390013  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:45.522996  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:45.523633  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:45.774870  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:45.905621  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:46.006161  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:46.006231  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:46.280230  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:46.395287  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:46.505457  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:46.507022  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:46.773788  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:46.889372  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:47.005694  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:47.011507  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:47.337527  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:47.352460  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:47.398727  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:47.513366  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:47.514652  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:47.776978  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:47.889841  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:48.007826  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:48.012596  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:48.277957  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:48.391643  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:48.514789  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:48.515698  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:48.772858  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:48.889647  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:49.005882  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:49.007871  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:49.272340  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:49.391023  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:49.506032  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:49.506196  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:49.578866  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:49.779632  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:49.891771  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:50.005041  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:50.006295  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:50.272981  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:50.389938  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:50.798568  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:50.801101  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:50.803231  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:50.890302  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:51.006590  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:51.009023  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:51.273049  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:51.390054  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:51.505102  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:51.508259  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:51.611494  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:51.776996  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:51.890251  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:52.006070  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:52.008047  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:52.276329  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:52.390557  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:52.505337  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:52.506681  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:52.773008  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:52.893270  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:53.007518  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:53.008005  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:53.276657  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:53.389916  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:53.508404  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:53.512045  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:53.777926  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:53.891817  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:54.007091  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:54.007424  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:54.079116  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:54.275596  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:54.389854  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:54.505928  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:54.506142  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:54.773241  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:54.890151  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:55.005763  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:55.009122  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:55.273080  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:55.390808  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:55.505134  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:55.508629  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:55.773656  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:55.890801  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:56.006960  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:56.007105  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:56.273917  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:56.390931  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:56.504346  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:56.507703  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:56.578621  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:56.774033  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:56.892973  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:57.005992  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:57.006117  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:57.545755  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:57.548803  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:57.550218  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:57.551583  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:57.772945  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:57.889642  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:58.006556  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:58.007710  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:58.272707  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:58.390943  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:58.505758  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:58.505806  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:58.772554  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:58.890337  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:59.010590  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:59.018199  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:59.080280  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:03:59.274156  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:59.391950  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:03:59.507828  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:03:59.513995  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:03:59.772183  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:03:59.890089  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:00.008494  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:00.012926  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:00.296531  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:00.394828  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:00.504371  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:00.507645  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:00.773108  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:00.890425  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:01.006983  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:01.007456  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:01.187109  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:01.295495  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:01.390027  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:01.515094  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:01.515751  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:01.772500  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:01.892661  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:02.004498  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:02.006479  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:02.274186  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:02.390258  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:02.516343  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:02.517032  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:02.773454  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:02.997557  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:03.005222  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:03.006301  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:03.276453  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:03.392735  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:03.506168  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:03.506544  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:03.578874  342378 pod_ready.go:102] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:03.773672  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:03.889691  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:04.005281  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:04.005665  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:04.273576  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:04.389594  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:04.505498  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:04.505537  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:04.776044  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:04.898889  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:05.014081  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:05.018043  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:05.079814  342378 pod_ready.go:92] pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace has status "Ready":"True"
	I0108 21:04:05.079847  342378 pod_ready.go:81] duration metric: took 33.008534763s waiting for pod "coredns-5dd5756b68-c7lz8" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.079861  342378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-417518" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.088467  342378 pod_ready.go:92] pod "etcd-addons-417518" in "kube-system" namespace has status "Ready":"True"
	I0108 21:04:05.088500  342378 pod_ready.go:81] duration metric: took 8.62421ms waiting for pod "etcd-addons-417518" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.088514  342378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-417518" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.093659  342378 pod_ready.go:92] pod "kube-apiserver-addons-417518" in "kube-system" namespace has status "Ready":"True"
	I0108 21:04:05.093684  342378 pod_ready.go:81] duration metric: took 5.161131ms waiting for pod "kube-apiserver-addons-417518" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.093696  342378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-417518" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.098768  342378 pod_ready.go:92] pod "kube-controller-manager-addons-417518" in "kube-system" namespace has status "Ready":"True"
	I0108 21:04:05.098788  342378 pod_ready.go:81] duration metric: took 5.085373ms waiting for pod "kube-controller-manager-addons-417518" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.098802  342378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nz2vh" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.104636  342378 pod_ready.go:92] pod "kube-proxy-nz2vh" in "kube-system" namespace has status "Ready":"True"
	I0108 21:04:05.104656  342378 pod_ready.go:81] duration metric: took 5.846279ms waiting for pod "kube-proxy-nz2vh" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.104667  342378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-417518" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.272394  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:05.390053  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:05.474638  342378 pod_ready.go:92] pod "kube-scheduler-addons-417518" in "kube-system" namespace has status "Ready":"True"
	I0108 21:04:05.474666  342378 pod_ready.go:81] duration metric: took 369.990952ms waiting for pod "kube-scheduler-addons-417518" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.474679  342378 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-cvgwj" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:05.505649  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:05.508190  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:05.772106  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:05.889680  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:06.004978  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:06.005774  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:06.272324  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:06.390094  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:06.507131  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:06.508199  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:06.775884  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:06.890180  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:07.005043  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:07.006423  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:07.272206  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:07.391801  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:07.484363  342378 pod_ready.go:102] pod "metrics-server-7c66d45ddc-cvgwj" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:07.505743  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:07.509407  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:07.775354  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:07.891392  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:08.005016  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:08.007556  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:08.316520  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:08.389197  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:08.506351  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:08.507566  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:08.772738  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:08.890532  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:09.013141  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:09.015614  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:09.277573  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:09.389467  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:09.482663  342378 pod_ready.go:92] pod "metrics-server-7c66d45ddc-cvgwj" in "kube-system" namespace has status "Ready":"True"
	I0108 21:04:09.482690  342378 pod_ready.go:81] duration metric: took 4.00800367s waiting for pod "metrics-server-7c66d45ddc-cvgwj" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:09.482699  342378 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fhphr" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:09.506190  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:09.507482  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:09.772555  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:09.890747  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:10.004120  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:10.008719  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:10.272402  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:10.390203  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:10.508418  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:10.521412  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:10.773186  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:10.891052  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:11.004321  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:11.006511  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:11.272718  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:11.389769  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:11.490217  342378 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-fhphr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:11.507939  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:11.508555  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:11.779644  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:11.891840  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:12.004283  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:12.007151  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:12.273797  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:12.496304  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:12.516208  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:12.525763  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:12.772116  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:12.890268  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:13.008755  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:13.009050  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:13.273170  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:13.390050  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:13.496917  342378 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-fhphr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:13.511698  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:13.511973  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:13.772323  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:13.890545  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:14.017833  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:14.019098  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:14.273333  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:14.453850  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:14.506401  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:14.506639  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:14.805295  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:14.890177  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:15.003772  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:15.006762  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:15.276327  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:15.390985  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:15.504865  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:15.506855  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:15.772766  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:15.890260  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:15.989698  342378 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-fhphr" in "kube-system" namespace has status "Ready":"False"
	I0108 21:04:16.011454  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:16.013132  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:16.272276  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:16.402237  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:16.493717  342378 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-fhphr" in "kube-system" namespace has status "Ready":"True"
	I0108 21:04:16.493745  342378 pod_ready.go:81] duration metric: took 7.011039704s waiting for pod "nvidia-device-plugin-daemonset-fhphr" in "kube-system" namespace to be "Ready" ...
	I0108 21:04:16.493763  342378 pod_ready.go:38] duration metric: took 44.467466269s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
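The pod_ready/kapi wait loops logged above repeatedly list labeled pods and check their Ready condition until they all report Ready or the timeout expires. A minimal client-go sketch of the same idea (not minikube's actual implementation); the kubeconfig path is taken from the log, while the namespace, label selector, poll interval, and timeout are illustrative assumptions:

// Hedged sketch: poll labeled pods for the Ready condition with client-go,
// roughly what the pod_ready/kapi wait loops in the log above are doing.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// waitForReadyPods lists pods matching selector in ns until all of them are Ready.
func waitForReadyPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !podReady(p) {
				allReady = false
				break
			}
		}
		if allReady {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // poll cadence comparable to the ~0.5s interval visible in the log
		}
	}
}

func main() {
	// Kubeconfig path copied from the log; selector and namespace are hypothetical examples.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForReadyPods(ctx, cs, "kube-system", "k8s-app=kube-dns"); err != nil {
		panic(err)
	}
	fmt.Println("all matching pods Ready")
}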
	I0108 21:04:16.493785  342378 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:04:16.493840  342378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:04:16.506561  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:16.507125  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:16.534403  342378 api_server.go:72] duration metric: took 53.684571464s to wait for apiserver process to appear ...
	I0108 21:04:16.534429  342378 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:04:16.534452  342378 api_server.go:253] Checking apiserver healthz at https://192.168.39.218:8443/healthz ...
	I0108 21:04:16.540140  342378 api_server.go:279] https://192.168.39.218:8443/healthz returned 200:
	ok
	I0108 21:04:16.541503  342378 api_server.go:141] control plane version: v1.28.4
	I0108 21:04:16.541527  342378 api_server.go:131] duration metric: took 7.091338ms to wait for apiserver health ...
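The apiserver health check logged just above is a plain HTTPS GET against the /healthz endpoint, which a healthy apiserver answers with status 200 and the body "ok". A minimal standalone Go sketch of an equivalent probe; the URL is copied from the log, and the skip-verify transport is an assumption for a locally provisioned cluster whose CA is not in the host trust store:

// Hedged sketch: an apiserver /healthz probe equivalent to the check logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver cert is not trusted by the host, so skip
			// verification for this local test probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.218:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with the literal body "ok".
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}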
	I0108 21:04:16.541535  342378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:04:16.551654  342378 system_pods.go:59] 18 kube-system pods found
	I0108 21:04:16.551693  342378 system_pods.go:61] "coredns-5dd5756b68-c7lz8" [97d20cf0-6829-4cf8-beca-87db8c588c41] Running
	I0108 21:04:16.551699  342378 system_pods.go:61] "csi-hostpath-attacher-0" [c084be7a-2255-4a72-b208-3f0e8c12824c] Running
	I0108 21:04:16.551703  342378 system_pods.go:61] "csi-hostpath-resizer-0" [473e8312-612a-4483-aeba-c4ac8dbadb8c] Running
	I0108 21:04:16.551713  342378 system_pods.go:61] "csi-hostpathplugin-7sjf6" [2736db7c-0b61-45e5-9010-c21d6b10319a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 21:04:16.551721  342378 system_pods.go:61] "etcd-addons-417518" [f72a124f-7912-413a-a448-b5a25a82ee6e] Running
	I0108 21:04:16.551733  342378 system_pods.go:61] "kube-apiserver-addons-417518" [5e7b64e8-8c25-463c-941f-9b51f6fa712a] Running
	I0108 21:04:16.551742  342378 system_pods.go:61] "kube-controller-manager-addons-417518" [ce90b4df-ef34-433b-ad49-a442c55d9cf4] Running
	I0108 21:04:16.551756  342378 system_pods.go:61] "kube-ingress-dns-minikube" [b0546a5b-757e-4851-9049-677f5d725202] Running
	I0108 21:04:16.551762  342378 system_pods.go:61] "kube-proxy-nz2vh" [20ba0d5f-0494-4cc2-9bee-f5d278e224d6] Running
	I0108 21:04:16.551769  342378 system_pods.go:61] "kube-scheduler-addons-417518" [0dc0a915-3a36-458d-9345-15124d3bfcf3] Running
	I0108 21:04:16.551777  342378 system_pods.go:61] "metrics-server-7c66d45ddc-cvgwj" [cacc38d2-0ddb-4fad-aab1-9d56fb63e65b] Running
	I0108 21:04:16.551783  342378 system_pods.go:61] "nvidia-device-plugin-daemonset-fhphr" [f86f2776-fb1d-4a75-8d29-8fcb306bd7cf] Running
	I0108 21:04:16.551792  342378 system_pods.go:61] "registry-proxy-sxr27" [b9ba0c2a-2815-46d7-a4ca-7b81a07d2778] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0108 21:04:16.551839  342378 system_pods.go:61] "registry-x6wr5" [87175079-3fbe-407b-b38d-1ef946385d32] Running
	I0108 21:04:16.551858  342378 system_pods.go:61] "snapshot-controller-58dbcc7b99-kdkk8" [80cc7e3f-28f5-44ca-90c4-c79508625163] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 21:04:16.551869  342378 system_pods.go:61] "snapshot-controller-58dbcc7b99-knklz" [a8284fe6-4887-4161-8b53-a1660d12a10d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 21:04:16.551881  342378 system_pods.go:61] "storage-provisioner" [6930dd02-4cc0-4a7b-ac48-ae16e451014e] Running
	I0108 21:04:16.551895  342378 system_pods.go:61] "tiller-deploy-7b677967b9-kfkkh" [a5f7ab68-b517-4693-acb4-fc7c512b7d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0108 21:04:16.551904  342378 system_pods.go:74] duration metric: took 10.362129ms to wait for pod list to return data ...
	I0108 21:04:16.551914  342378 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:04:16.554300  342378 default_sa.go:45] found service account: "default"
	I0108 21:04:16.554321  342378 default_sa.go:55] duration metric: took 2.399943ms for default service account to be created ...
	I0108 21:04:16.554327  342378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:04:16.575173  342378 system_pods.go:86] 18 kube-system pods found
	I0108 21:04:16.575205  342378 system_pods.go:89] "coredns-5dd5756b68-c7lz8" [97d20cf0-6829-4cf8-beca-87db8c588c41] Running
	I0108 21:04:16.575214  342378 system_pods.go:89] "csi-hostpath-attacher-0" [c084be7a-2255-4a72-b208-3f0e8c12824c] Running
	I0108 21:04:16.575220  342378 system_pods.go:89] "csi-hostpath-resizer-0" [473e8312-612a-4483-aeba-c4ac8dbadb8c] Running
	I0108 21:04:16.575230  342378 system_pods.go:89] "csi-hostpathplugin-7sjf6" [2736db7c-0b61-45e5-9010-c21d6b10319a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 21:04:16.575237  342378 system_pods.go:89] "etcd-addons-417518" [f72a124f-7912-413a-a448-b5a25a82ee6e] Running
	I0108 21:04:16.575243  342378 system_pods.go:89] "kube-apiserver-addons-417518" [5e7b64e8-8c25-463c-941f-9b51f6fa712a] Running
	I0108 21:04:16.575248  342378 system_pods.go:89] "kube-controller-manager-addons-417518" [ce90b4df-ef34-433b-ad49-a442c55d9cf4] Running
	I0108 21:04:16.575253  342378 system_pods.go:89] "kube-ingress-dns-minikube" [b0546a5b-757e-4851-9049-677f5d725202] Running
	I0108 21:04:16.575257  342378 system_pods.go:89] "kube-proxy-nz2vh" [20ba0d5f-0494-4cc2-9bee-f5d278e224d6] Running
	I0108 21:04:16.575263  342378 system_pods.go:89] "kube-scheduler-addons-417518" [0dc0a915-3a36-458d-9345-15124d3bfcf3] Running
	I0108 21:04:16.575267  342378 system_pods.go:89] "metrics-server-7c66d45ddc-cvgwj" [cacc38d2-0ddb-4fad-aab1-9d56fb63e65b] Running
	I0108 21:04:16.575270  342378 system_pods.go:89] "nvidia-device-plugin-daemonset-fhphr" [f86f2776-fb1d-4a75-8d29-8fcb306bd7cf] Running
	I0108 21:04:16.575276  342378 system_pods.go:89] "registry-proxy-sxr27" [b9ba0c2a-2815-46d7-a4ca-7b81a07d2778] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0108 21:04:16.575285  342378 system_pods.go:89] "registry-x6wr5" [87175079-3fbe-407b-b38d-1ef946385d32] Running
	I0108 21:04:16.575292  342378 system_pods.go:89] "snapshot-controller-58dbcc7b99-kdkk8" [80cc7e3f-28f5-44ca-90c4-c79508625163] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 21:04:16.575300  342378 system_pods.go:89] "snapshot-controller-58dbcc7b99-knklz" [a8284fe6-4887-4161-8b53-a1660d12a10d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 21:04:16.575304  342378 system_pods.go:89] "storage-provisioner" [6930dd02-4cc0-4a7b-ac48-ae16e451014e] Running
	I0108 21:04:16.575310  342378 system_pods.go:89] "tiller-deploy-7b677967b9-kfkkh" [a5f7ab68-b517-4693-acb4-fc7c512b7d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0108 21:04:16.575318  342378 system_pods.go:126] duration metric: took 20.985704ms to wait for k8s-apps to be running ...
	I0108 21:04:16.575327  342378 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:04:16.575391  342378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:04:16.603479  342378 system_svc.go:56] duration metric: took 28.139969ms WaitForService to wait for kubelet.
	I0108 21:04:16.603556  342378 kubeadm.go:581] duration metric: took 53.753724059s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:04:16.603602  342378 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:04:16.609676  342378 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:04:16.609704  342378 node_conditions.go:123] node cpu capacity is 2
	I0108 21:04:16.609716  342378 node_conditions.go:105] duration metric: took 6.108358ms to run NodePressure ...
	I0108 21:04:16.609728  342378 start.go:228] waiting for startup goroutines ...
	I0108 21:04:16.772897  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:16.890731  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:17.006202  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:17.008618  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:17.272704  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:17.389825  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:17.507473  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:17.510244  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:17.781976  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:17.890313  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:18.006093  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:18.007206  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:18.273172  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:18.391109  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:18.505620  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:18.505749  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:18.773456  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:18.891203  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:19.005734  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:19.006117  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:19.272398  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:19.390146  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:19.505978  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:19.506491  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:19.779035  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:19.893729  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:20.007821  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:20.010594  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:20.272667  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:20.396463  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:20.507691  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:20.511401  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:20.773736  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:20.896067  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:21.005206  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:21.007961  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:21.272086  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:21.392550  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:21.505472  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:21.505891  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:21.773630  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:21.890316  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:22.007485  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:22.016840  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:22.271708  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:22.408721  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:22.505448  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:22.506400  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:22.772834  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:22.889934  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:23.005588  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:23.008150  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:23.272733  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:23.395944  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:23.511644  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:23.514012  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:23.772804  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:23.890719  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:24.005510  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:24.005734  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:24.272574  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:24.390209  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:24.507666  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:24.509495  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:24.773559  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:24.889992  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:25.008695  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:25.014112  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:25.272886  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:25.389525  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:25.510745  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:25.510960  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:25.774291  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:25.891297  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:26.006412  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:26.009019  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:26.273865  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:26.389758  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:26.512208  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:26.519551  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:26.775491  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:26.890938  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:27.006474  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:27.007057  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:27.272866  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:27.389697  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:27.508056  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:27.508537  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:27.776490  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:27.889985  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:28.006227  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:28.007167  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:28.276240  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:28.390849  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:28.512230  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:28.515915  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:28.772195  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:28.890313  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:29.006922  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:29.007510  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:29.272528  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:29.390047  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:29.507596  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:29.508054  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:29.772249  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:29.889980  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:30.007491  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:30.008871  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:30.272687  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:30.389772  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:30.505549  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:30.505903  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:30.778033  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:30.889697  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:31.363626  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:31.363729  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:31.364092  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:31.399794  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:31.505898  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:31.508695  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:31.772968  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:31.890902  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:32.006501  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:32.006621  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:32.274358  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:32.390416  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:32.507091  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:32.509283  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:32.773508  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:32.893357  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:33.007588  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:33.009288  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:33.272703  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:33.389767  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:33.505636  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:33.505985  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:33.772568  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:33.891150  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:34.005510  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:34.005675  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:34.273390  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:34.390819  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:34.505086  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:34.506764  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:34.773482  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:34.889841  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:35.005020  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:35.006284  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:35.273309  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:35.689080  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:35.692661  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:35.692807  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:35.771964  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:35.889996  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:36.006111  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:36.006736  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:36.273538  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:36.409388  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:36.512990  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:36.514583  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 21:04:36.780575  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:36.897775  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:37.005787  342378 kapi.go:107] duration metric: took 1m5.008855924s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 21:04:37.008375  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:37.274071  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:37.390731  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:37.507617  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:37.774402  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:37.908307  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:38.018332  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:38.286386  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:38.394507  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:38.506176  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:38.777003  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:38.900917  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:39.008764  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:39.273934  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:39.395868  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:39.505768  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:39.774582  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:39.890185  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:40.009209  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:40.279282  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:40.390427  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:40.513636  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:40.773334  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:40.893039  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:41.006014  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:41.272703  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:41.389366  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:41.505733  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:41.774244  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:41.889691  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:42.005711  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:42.471333  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:42.481612  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:42.513713  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:42.774132  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:42.894353  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:43.005770  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:43.276136  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:43.389446  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:43.506143  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:43.772270  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:43.890364  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:44.006449  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:44.278313  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:44.390310  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:44.506905  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:44.773988  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:44.892181  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:45.005860  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:45.273299  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:45.390626  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:45.507459  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:45.774062  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:46.016725  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:46.019040  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:46.272589  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:46.389771  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:46.505344  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:46.772274  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:46.890419  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:47.006347  342378 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 21:04:47.273158  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:47.407482  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:47.506653  342378 kapi.go:107] duration metric: took 1m15.506125986s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 21:04:47.779398  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:47.890169  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:48.272229  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:48.390195  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:48.772862  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:48.889859  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:49.282190  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:49.398305  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:49.772319  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:49.890699  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:50.282713  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:50.390006  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:50.772899  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:50.889105  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 21:04:51.284630  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:51.390083  342378 kapi.go:107] duration metric: took 1m16.50443855s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 21:04:51.391890  342378 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-417518 cluster.
	I0108 21:04:51.394904  342378 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 21:04:51.396461  342378 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0108 21:04:51.775121  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:52.281901  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:52.773146  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:53.273369  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:53.772266  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:54.276883  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:54.773082  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:55.271873  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:55.774605  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:56.272650  342378 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 21:04:56.773298  342378 kapi.go:107] duration metric: took 1m24.007079735s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 21:04:56.775257  342378 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, inspektor-gadget, yakd, helm-tiller, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0108 21:04:56.776773  342378 addons.go:508] enable addons completed in 1m34.57956832s: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner inspektor-gadget yakd helm-tiller metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0108 21:04:56.776812  342378 start.go:233] waiting for cluster config update ...
	I0108 21:04:56.776866  342378 start.go:242] writing updated cluster config ...
	I0108 21:04:56.777134  342378 ssh_runner.go:195] Run: rm -f paused
	I0108 21:04:56.839726  342378 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:04:56.841489  342378 out.go:177] * Done! kubectl is now configured to use "addons-417518" cluster and "default" namespace by default
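The gcp-auth messages earlier in this run point at two knobs: a per-pod opt-out label (gcp-auth-skip-secret) and rerunning addons enable with --refresh for pods that already existed. As a minimal sketch, not taken from this run, a pod that should not have GCP credentials mounted might carry the label like this; the pod and container names and the label value "true" are assumptions for illustration, and only the gcp-auth-skip-secret key comes from the log above:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # illustrative name only
      labels:
        gcp-auth-skip-secret: "true"  # label key taken from the log message; value assumed
    spec:
      containers:
      - name: app                     # illustrative name only
        image: gcr.io/google-samples/hello-app:1.0

For pods created before the gcp-auth webhook came up, the log's own suggestion applies: recreate them, or rerun the addon enable with --refresh.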
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:02:37 UTC, ends at Mon 2024-01-08 21:07:44 UTC. --
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.679416432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=85d4fed8-8105-48b4-b58e-2364427cebfc name=/runtime.v1.RuntimeService/Version
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.680736305Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e1e3ab43-8fb1-4bcd-acb9-b3897d5dbb12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.682967082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748064682938590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=e1e3ab43-8fb1-4bcd-acb9-b3897d5dbb12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.686115946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2792df54-13f6-4c4c-b5d0-a0e92fbf3c44 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.686242379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2792df54-13f6-4c4c-b5d0-a0e92fbf3c44 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.686844228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:034c1524295366a2195a2dd9c0df29528d78dc6968bf5bf1b81a07dee7a021d7,PodSandboxId:cd48546d206db3a3f9832cfe8054d7ff0dac9a26648c0193c7f0863b85141647,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704748056494568567,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-p5nfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02b4cc66-9a40-43b0-8094-30ef0299344f,},Annotations:map[string]string{io.kubernetes.container.hash: eedf7f46,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4b5f783242c5110cacc01f6e260e0c4d2a9dbaa9836a00f047f3ecd09227ed,PodSandboxId:c3e9574b170890f2d703f73970c2e3b4107dee69d07ead8febf0081bf57c45ca,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704747933021780874,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-f4qhq,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f7d385c9-f32a-465c-8a10-f00b1b199d34,},An
notations:map[string]string{io.kubernetes.container.hash: 1cca9060,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c34743119beb820e67412b39f831eab98ecb30732cda13300f5eb4d33fe8a0,PodSandboxId:baf57858ac7a3d51a2d41c22d69325de46437b5cdc5878dbdb7f096b28c16363,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704747915201419896,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 9362d43b-5fac-464e-8653-c188bc6b4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 334ce412,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4007dc85afd5c87ffb8646c450636d1c515336da41343e604d0f4f93723a0b,PodSandboxId:85012b875c819e02134f971812b4acc3363fa92177b3f210ebc6f7977244779d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704747890628108374,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-hxx78,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b18bb0ac-9a14-41c4-a44f-96e3d5d36d52,},Annotations:map[string]string{io.kubernetes.container.hash: c07008e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12ae1ac16817e17bbe3491726e3d171672df65ea6832736423a53a4d37feddcc,PodSandboxId:64b484f1c301b98f491ad0c0c9e8ef07fdea46f0a9d27afdd5dc5fca924a862d,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17047478
80361895883,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g7r6v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2817f066-66b4-4b1e-8987-3d3c38fe51f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe93d14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78176b4146ac960c54d8c9f7e951f34beae3d5808aa8d93f8401b15644050a1,PodSandboxId:011becaa93336db3410d09d386763971ee14d99fd8190e8ad3696e91bbac1e40,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704747863609293403,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zv29v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bbcc1e06-b59c-468e-a0ea-32ffcb385093,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1a2757,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9b3d7d116226067e20806133ef2c4051ecc5cb3ea07aecdc03f96dec5e2781,PodSandboxId:937e5b54caa1ef08787f8d11f01d660874b779288178e8f7e4dac94c6504082c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner
@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1704747859788470361,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-96p49,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 44a23a7d-c77e-4159-af59-33dc6ad9f979,},Annotations:map[string]string{io.kubernetes.container.hash: cdebd85a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321f4894fdbe1ea088a3cccca01475c26718bd59e15ba23fdd2e6e1a01a0ca99,PodSandboxId:f126b8c499744c1d384433796a0e4186f6b7c18554431b8b1616055ceefb7d72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704747823310960058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6930dd02-4cc0-4a7b-ac48-ae16e451014e,},Annotations:map[string]string{io.kubernetes.container.hash: efb753bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff3ff324f6eb9b2a790682c639a6845b55e1cf34d95b80a2a24238f771c1374,PodSandboxId:b2fa33bfcbdfdc31e56b1593911c141f164f50594930e8cc201df5720f2e43dc,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},Imag
eRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704747823444692040,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gn4q5,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 786efbea-f92d-4fb6-ab90-454c08ba2467,},Annotations:map[string]string{io.kubernetes.container.hash: 501bd69b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a45cf847174a1f42aacb82d3e8eca9e6f1d973df0708c3ea05b00b7d2c4e40,PodSandboxId:5478b5d196fd462f5173f7a27f8e8126a0b51f60fe11a03829d2997d7aa918b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c7
2967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704747809656110696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c7lz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d20cf0-6829-4cf8-beca-87db8c588c41,},Annotations:map[string]string{io.kubernetes.container.hash: c2813eb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe3c9a9549862c9f5275c0a791ceeb561158545b2e45c26458006cd5
f92b1ee,PodSandboxId:46b6f272ad6366ac66df527e693e1324e328533cbe2699f9cd556e3824053885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704747804641394436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nz2vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ba0d5f-0494-4cc2-9bee-f5d278e224d6,},Annotations:map[string]string{io.kubernetes.container.hash: b6d23901,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85bc8b6bca0b97544f8fff65b2cd34491c8ab231f43ceb27295beb87671b58de,PodSandboxId:03f2a4f3661237
8986caf561ca797e201421a316145fb74c091f4ae552e585cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704747781586514062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c6c42df332c80c3a84bfaadd4dc9,},Annotations:map[string]string{io.kubernetes.container.hash: f73f12f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff189d909c657baebf373a534b8f457e4023ac31804da84f4d29364cfd365e98,PodSandboxId:15f039de59688cbe74e0681fe00c80cf48459e809c08a8c2f32cc087a21eb1ba,Meta
data:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704747781463201565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30e8325e80e951b663cd73e01068405,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42632a4ef56a169586ebf17132302634eee52b8d17e4d70ff9631eef4ea1ec68,PodSandboxId:8b27f3765e79d43faf88d95c4b89ae70faa972d7d95be4cad174ad2d0b7439c1,Metadata:&Container
Metadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704747781425029287,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 924350b926d016c581fd6a48e0b43b23,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7305ca7bc73c5375c487af9c2468e16cce1100ca53d3ec5f3de06982359a6710,PodSandboxId:88fe8c0f14d110032b12926aff94eec3e60e27e6c1915b6a67e32859d7e4d0f
a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704747781203912528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08b71de29e9ab538987364e1b3ad66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 23736642,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2792df54-13f6-4c4c-b5d0-a0e92fbf3c44 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.715799332Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c3fa8b43-7c9e-4c2e-a898-c4e29d01361b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.716233892Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cd48546d206db3a3f9832cfe8054d7ff0dac9a26648c0193c7f0863b85141647,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-p5nfj,Uid:02b4cc66-9a40-43b0-8094-30ef0299344f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704748054322165285,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-p5nfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02b4cc66-9a40-43b0-8094-30ef0299344f,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:07:33.685265028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3e9574b170890f2d703f73970c2e3b4107dee69d07ead8febf0081bf57c45ca,Metadata:&PodSandboxMetadata{Name:headlamp-7ddfbb94ff-f4qhq,Uid:f7d385c9-f32a-465c-8a10-f00b1b199d34,Namespace
:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747927116200005,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7ddfbb94ff-f4qhq,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f7d385c9-f32a-465c-8a10-f00b1b199d34,pod-template-hash: 7ddfbb94ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:05:26.759711676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:baf57858ac7a3d51a2d41c22d69325de46437b5cdc5878dbdb7f096b28c16363,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9362d43b-5fac-464e-8653-c188bc6b4d90,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747910559501356,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9362d43b-5fac-464e-8653-c188bc6b4d90,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
01-08T21:05:10.211680904Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:85012b875c819e02134f971812b4acc3363fa92177b3f210ebc6f7977244779d,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-hxx78,Uid:b18bb0ac-9a14-41c4-a44f-96e3d5d36d52,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747878750169549,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-hxx78,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b18bb0ac-9a14-41c4-a44f-96e3d5d36d52,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:03:34.700579980Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2fa33bfcbdfdc31e56b1593911c141f164f50594930e8cc201df5720f2e43dc,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-9947fc6bf-gn4q5,Uid:786efbea-f92d-4fb6-ab90-454c08ba2467,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt
:1704747811117835249,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gn4q5,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 786efbea-f92d-4fb6-ab90-454c08ba2467,pod-template-hash: 9947fc6bf,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:03:30.779878102Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:937e5b54caa1ef08787f8d11f01d660874b779288178e8f7e4dac94c6504082c,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-78b46b4d5c-96p49,Uid:44a23a7d-c77e-4159-af59-33dc6ad9f979,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747810584400936,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-96p49,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.u
id: 44a23a7d-c77e-4159-af59-33dc6ad9f979,pod-template-hash: 78b46b4d5c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:03:30.224335006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f126b8c499744c1d384433796a0e4186f6b7c18554431b8b1616055ceefb7d72,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6930dd02-4cc0-4a7b-ac48-ae16e451014e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747810415919626,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6930dd02-4cc0-4a7b-ac48-ae16e451014e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"n
ame\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-08T21:03:30.071601693Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5478b5d196fd462f5173f7a27f8e8126a0b51f60fe11a03829d2997d7aa918b8,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-c7lz8,Uid:97d20cf0-6829-4cf8-beca-87db8c588c41,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747804884997374,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-c7lz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
97d20cf0-6829-4cf8-beca-87db8c588c41,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:03:23.049456983Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:46b6f272ad6366ac66df527e693e1324e328533cbe2699f9cd556e3824053885,Metadata:&PodSandboxMetadata{Name:kube-proxy-nz2vh,Uid:20ba0d5f-0494-4cc2-9bee-f5d278e224d6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747803473626803,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nz2vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ba0d5f-0494-4cc2-9bee-f5d278e224d6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:03:22.541553506Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:15f039de59688cbe74e0681fe00c80cf48459e809c08a8c2f32cc087a21eb1ba,Metadata:&PodSandboxMetadata{Na
me:kube-scheduler-addons-417518,Uid:a30e8325e80e951b663cd73e01068405,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747780748911230,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30e8325e80e951b663cd73e01068405,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a30e8325e80e951b663cd73e01068405,kubernetes.io/config.seen: 2024-01-08T21:03:00.207633456Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:03f2a4f36612378986caf561ca797e201421a316145fb74c091f4ae552e585cb,Metadata:&PodSandboxMetadata{Name:etcd-addons-417518,Uid:3d34c6c42df332c80c3a84bfaadd4dc9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747780730487367,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-417518,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 3d34c6c42df332c80c3a84bfaadd4dc9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.218:2379,kubernetes.io/config.hash: 3d34c6c42df332c80c3a84bfaadd4dc9,kubernetes.io/config.seen: 2024-01-08T21:03:00.207628051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88fe8c0f14d110032b12926aff94eec3e60e27e6c1915b6a67e32859d7e4d0fa,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-417518,Uid:08b71de29e9ab538987364e1b3ad66e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747780704767859,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08b71de29e9ab538987364e1b3ad66e8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.218:8443,kubernetes.io/config.hash: 08
b71de29e9ab538987364e1b3ad66e8,kubernetes.io/config.seen: 2024-01-08T21:03:00.207631515Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8b27f3765e79d43faf88d95c4b89ae70faa972d7d95be4cad174ad2d0b7439c1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-417518,Uid:924350b926d016c581fd6a48e0b43b23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704747780671997535,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 924350b926d016c581fd6a48e0b43b23,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 924350b926d016c581fd6a48e0b43b23,kubernetes.io/config.seen: 2024-01-08T21:03:00.207632641Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=c3fa8b43-7c9e-4c2e-a898-c4e29d01361b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.717275065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6527f5bc-647a-469a-8bf8-1657795f82f3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.717352061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6527f5bc-647a-469a-8bf8-1657795f82f3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.717780439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:034c1524295366a2195a2dd9c0df29528d78dc6968bf5bf1b81a07dee7a021d7,PodSandboxId:cd48546d206db3a3f9832cfe8054d7ff0dac9a26648c0193c7f0863b85141647,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704748056494568567,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-p5nfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02b4cc66-9a40-43b0-8094-30ef0299344f,},Annotations:map[string]string{io.kubernetes.container.hash: eedf7f46,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4b5f783242c5110cacc01f6e260e0c4d2a9dbaa9836a00f047f3ecd09227ed,PodSandboxId:c3e9574b170890f2d703f73970c2e3b4107dee69d07ead8febf0081bf57c45ca,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704747933021780874,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-f4qhq,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f7d385c9-f32a-465c-8a10-f00b1b199d34,},An
notations:map[string]string{io.kubernetes.container.hash: 1cca9060,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c34743119beb820e67412b39f831eab98ecb30732cda13300f5eb4d33fe8a0,PodSandboxId:baf57858ac7a3d51a2d41c22d69325de46437b5cdc5878dbdb7f096b28c16363,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704747915201419896,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 9362d43b-5fac-464e-8653-c188bc6b4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 334ce412,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4007dc85afd5c87ffb8646c450636d1c515336da41343e604d0f4f93723a0b,PodSandboxId:85012b875c819e02134f971812b4acc3363fa92177b3f210ebc6f7977244779d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704747890628108374,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-hxx78,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b18bb0ac-9a14-41c4-a44f-96e3d5d36d52,},Annotations:map[string]string{io.kubernetes.container.hash: c07008e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9b3d7d116226067e20806133ef2c4051ecc5cb3ea07aecdc03f96dec5e2781,PodSandboxId:937e5b54caa1ef08787f8d11f01d660874b779288178e8f7e4dac94c6504082c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1704747859788470361,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-96p49,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 44a23a7d-c77e-4159-af59-33dc6ad9f979,},Annotations:map[string]string{io.kubernetes.container.hash: cdebd85a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321f4894fdbe1ea088a3cccca01475c26718bd59e15ba23fdd2e6e1a01a0ca99,PodSandboxId:f126b8c499744c1d384433796a0e4186f6b7c18554431b8b1616055ceefb7d72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e
399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704747823310960058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6930dd02-4cc0-4a7b-ac48-ae16e451014e,},Annotations:map[string]string{io.kubernetes.container.hash: efb753bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff3ff324f6eb9b2a790682c639a6845b55e1cf34d95b80a2a24238f771c1374,PodSandboxId:b2fa33bfcbdfdc31e56b1593911c141f164f50594930e8cc201df5720f2e43dc,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605
311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704747823444692040,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gn4q5,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 786efbea-f92d-4fb6-ab90-454c08ba2467,},Annotations:map[string]string{io.kubernetes.container.hash: 501bd69b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a45cf847174a1f42aacb82d3e8eca9e6f1d973df0708c3ea05b00b7d2c4e40,PodSandboxId:5478b5d196fd462f5173f7a27f8e8126a0b51f60fe11a03829d2997d7aa918b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{
},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704747809656110696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c7lz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d20cf0-6829-4cf8-beca-87db8c588c41,},Annotations:map[string]string{io.kubernetes.container.hash: c2813eb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe3c9a9549862c9f5275c0a791ceeb561158545b2e45c26458006cd5f92b1ee,PodSandboxId:46b6f272ad6366ac66df527e693e1324e32853
3cbe2699f9cd556e3824053885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704747804641394436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nz2vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ba0d5f-0494-4cc2-9bee-f5d278e224d6,},Annotations:map[string]string{io.kubernetes.container.hash: b6d23901,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85bc8b6bca0b97544f8fff65b2cd34491c8ab231f43ceb27295beb87671b58de,PodSandboxId:03f2a4f36612378986caf561ca797e201421a316145fb74c091f4ae552e585cb,Metadata
:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704747781586514062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c6c42df332c80c3a84bfaadd4dc9,},Annotations:map[string]string{io.kubernetes.container.hash: f73f12f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff189d909c657baebf373a534b8f457e4023ac31804da84f4d29364cfd365e98,PodSandboxId:15f039de59688cbe74e0681fe00c80cf48459e809c08a8c2f32cc087a21eb1ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Ima
ge:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704747781463201565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30e8325e80e951b663cd73e01068405,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42632a4ef56a169586ebf17132302634eee52b8d17e4d70ff9631eef4ea1ec68,PodSandboxId:8b27f3765e79d43faf88d95c4b89ae70faa972d7d95be4cad174ad2d0b7439c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&Im
ageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704747781425029287,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 924350b926d016c581fd6a48e0b43b23,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7305ca7bc73c5375c487af9c2468e16cce1100ca53d3ec5f3de06982359a6710,PodSandboxId:88fe8c0f14d110032b12926aff94eec3e60e27e6c1915b6a67e32859d7e4d0fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0
,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704747781203912528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08b71de29e9ab538987364e1b3ad66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 23736642,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6527f5bc-647a-469a-8bf8-1657795f82f3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.726986308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d7f0b46f-cf1d-412d-8473-5b9b9e3813bb name=/runtime.v1.RuntimeService/Version
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.727130750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d7f0b46f-cf1d-412d-8473-5b9b9e3813bb name=/runtime.v1.RuntimeService/Version
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.729633861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0b4b5a7c-93be-4f4c-8f68-7eef2ba80074 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.730911726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748064730896455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=0b4b5a7c-93be-4f4c-8f68-7eef2ba80074 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.731964783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=97ce123c-e9df-4279-8a14-0ad576800b18 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.732010723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=97ce123c-e9df-4279-8a14-0ad576800b18 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.732543580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:034c1524295366a2195a2dd9c0df29528d78dc6968bf5bf1b81a07dee7a021d7,PodSandboxId:cd48546d206db3a3f9832cfe8054d7ff0dac9a26648c0193c7f0863b85141647,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704748056494568567,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-p5nfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02b4cc66-9a40-43b0-8094-30ef0299344f,},Annotations:map[string]string{io.kubernetes.container.hash: eedf7f46,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4b5f783242c5110cacc01f6e260e0c4d2a9dbaa9836a00f047f3ecd09227ed,PodSandboxId:c3e9574b170890f2d703f73970c2e3b4107dee69d07ead8febf0081bf57c45ca,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704747933021780874,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-f4qhq,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f7d385c9-f32a-465c-8a10-f00b1b199d34,},An
notations:map[string]string{io.kubernetes.container.hash: 1cca9060,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c34743119beb820e67412b39f831eab98ecb30732cda13300f5eb4d33fe8a0,PodSandboxId:baf57858ac7a3d51a2d41c22d69325de46437b5cdc5878dbdb7f096b28c16363,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704747915201419896,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 9362d43b-5fac-464e-8653-c188bc6b4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 334ce412,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4007dc85afd5c87ffb8646c450636d1c515336da41343e604d0f4f93723a0b,PodSandboxId:85012b875c819e02134f971812b4acc3363fa92177b3f210ebc6f7977244779d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704747890628108374,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-hxx78,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b18bb0ac-9a14-41c4-a44f-96e3d5d36d52,},Annotations:map[string]string{io.kubernetes.container.hash: c07008e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12ae1ac16817e17bbe3491726e3d171672df65ea6832736423a53a4d37feddcc,PodSandboxId:64b484f1c301b98f491ad0c0c9e8ef07fdea46f0a9d27afdd5dc5fca924a862d,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17047478
80361895883,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g7r6v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2817f066-66b4-4b1e-8987-3d3c38fe51f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe93d14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78176b4146ac960c54d8c9f7e951f34beae3d5808aa8d93f8401b15644050a1,PodSandboxId:011becaa93336db3410d09d386763971ee14d99fd8190e8ad3696e91bbac1e40,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704747863609293403,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zv29v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bbcc1e06-b59c-468e-a0ea-32ffcb385093,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1a2757,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9b3d7d116226067e20806133ef2c4051ecc5cb3ea07aecdc03f96dec5e2781,PodSandboxId:937e5b54caa1ef08787f8d11f01d660874b779288178e8f7e4dac94c6504082c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner
@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1704747859788470361,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-96p49,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 44a23a7d-c77e-4159-af59-33dc6ad9f979,},Annotations:map[string]string{io.kubernetes.container.hash: cdebd85a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321f4894fdbe1ea088a3cccca01475c26718bd59e15ba23fdd2e6e1a01a0ca99,PodSandboxId:f126b8c499744c1d384433796a0e4186f6b7c18554431b8b1616055ceefb7d72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704747823310960058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6930dd02-4cc0-4a7b-ac48-ae16e451014e,},Annotations:map[string]string{io.kubernetes.container.hash: efb753bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff3ff324f6eb9b2a790682c639a6845b55e1cf34d95b80a2a24238f771c1374,PodSandboxId:b2fa33bfcbdfdc31e56b1593911c141f164f50594930e8cc201df5720f2e43dc,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},Imag
eRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704747823444692040,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gn4q5,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 786efbea-f92d-4fb6-ab90-454c08ba2467,},Annotations:map[string]string{io.kubernetes.container.hash: 501bd69b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a45cf847174a1f42aacb82d3e8eca9e6f1d973df0708c3ea05b00b7d2c4e40,PodSandboxId:5478b5d196fd462f5173f7a27f8e8126a0b51f60fe11a03829d2997d7aa918b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c7
2967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704747809656110696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c7lz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d20cf0-6829-4cf8-beca-87db8c588c41,},Annotations:map[string]string{io.kubernetes.container.hash: c2813eb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe3c9a9549862c9f5275c0a791ceeb561158545b2e45c26458006cd5
f92b1ee,PodSandboxId:46b6f272ad6366ac66df527e693e1324e328533cbe2699f9cd556e3824053885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704747804641394436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nz2vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ba0d5f-0494-4cc2-9bee-f5d278e224d6,},Annotations:map[string]string{io.kubernetes.container.hash: b6d23901,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85bc8b6bca0b97544f8fff65b2cd34491c8ab231f43ceb27295beb87671b58de,PodSandboxId:03f2a4f3661237
8986caf561ca797e201421a316145fb74c091f4ae552e585cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704747781586514062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c6c42df332c80c3a84bfaadd4dc9,},Annotations:map[string]string{io.kubernetes.container.hash: f73f12f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff189d909c657baebf373a534b8f457e4023ac31804da84f4d29364cfd365e98,PodSandboxId:15f039de59688cbe74e0681fe00c80cf48459e809c08a8c2f32cc087a21eb1ba,Meta
data:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704747781463201565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30e8325e80e951b663cd73e01068405,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42632a4ef56a169586ebf17132302634eee52b8d17e4d70ff9631eef4ea1ec68,PodSandboxId:8b27f3765e79d43faf88d95c4b89ae70faa972d7d95be4cad174ad2d0b7439c1,Metadata:&Container
Metadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704747781425029287,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 924350b926d016c581fd6a48e0b43b23,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7305ca7bc73c5375c487af9c2468e16cce1100ca53d3ec5f3de06982359a6710,PodSandboxId:88fe8c0f14d110032b12926aff94eec3e60e27e6c1915b6a67e32859d7e4d0f
a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704747781203912528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08b71de29e9ab538987364e1b3ad66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 23736642,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=97ce123c-e9df-4279-8a14-0ad576800b18 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.770757504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=22e69780-279c-4535-8b4c-4fce6a80125b name=/runtime.v1.RuntimeService/Version
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.770836486Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=22e69780-279c-4535-8b4c-4fce6a80125b name=/runtime.v1.RuntimeService/Version
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.772189149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ab0abfad-9460-4eda-8aa1-93f9ade5c887 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.773429264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748064773414283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=ab0abfad-9460-4eda-8aa1-93f9ade5c887 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.774028526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f7b8aa71-aad8-421f-85e2-911ebb536244 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.774223046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f7b8aa71-aad8-421f-85e2-911ebb536244 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:07:44 addons-417518 crio[720]: time="2024-01-08 21:07:44.774611370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:034c1524295366a2195a2dd9c0df29528d78dc6968bf5bf1b81a07dee7a021d7,PodSandboxId:cd48546d206db3a3f9832cfe8054d7ff0dac9a26648c0193c7f0863b85141647,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704748056494568567,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-p5nfj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02b4cc66-9a40-43b0-8094-30ef0299344f,},Annotations:map[string]string{io.kubernetes.container.hash: eedf7f46,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4b5f783242c5110cacc01f6e260e0c4d2a9dbaa9836a00f047f3ecd09227ed,PodSandboxId:c3e9574b170890f2d703f73970c2e3b4107dee69d07ead8febf0081bf57c45ca,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704747933021780874,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-f4qhq,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f7d385c9-f32a-465c-8a10-f00b1b199d34,},An
notations:map[string]string{io.kubernetes.container.hash: 1cca9060,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c34743119beb820e67412b39f831eab98ecb30732cda13300f5eb4d33fe8a0,PodSandboxId:baf57858ac7a3d51a2d41c22d69325de46437b5cdc5878dbdb7f096b28c16363,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704747915201419896,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 9362d43b-5fac-464e-8653-c188bc6b4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 334ce412,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4007dc85afd5c87ffb8646c450636d1c515336da41343e604d0f4f93723a0b,PodSandboxId:85012b875c819e02134f971812b4acc3363fa92177b3f210ebc6f7977244779d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704747890628108374,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-hxx78,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b18bb0ac-9a14-41c4-a44f-96e3d5d36d52,},Annotations:map[string]string{io.kubernetes.container.hash: c07008e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12ae1ac16817e17bbe3491726e3d171672df65ea6832736423a53a4d37feddcc,PodSandboxId:64b484f1c301b98f491ad0c0c9e8ef07fdea46f0a9d27afdd5dc5fca924a862d,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17047478
80361895883,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g7r6v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2817f066-66b4-4b1e-8987-3d3c38fe51f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe93d14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78176b4146ac960c54d8c9f7e951f34beae3d5808aa8d93f8401b15644050a1,PodSandboxId:011becaa93336db3410d09d386763971ee14d99fd8190e8ad3696e91bbac1e40,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704747863609293403,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zv29v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bbcc1e06-b59c-468e-a0ea-32ffcb385093,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1a2757,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9b3d7d116226067e20806133ef2c4051ecc5cb3ea07aecdc03f96dec5e2781,PodSandboxId:937e5b54caa1ef08787f8d11f01d660874b779288178e8f7e4dac94c6504082c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner
@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1704747859788470361,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-96p49,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 44a23a7d-c77e-4159-af59-33dc6ad9f979,},Annotations:map[string]string{io.kubernetes.container.hash: cdebd85a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321f4894fdbe1ea088a3cccca01475c26718bd59e15ba23fdd2e6e1a01a0ca99,PodSandboxId:f126b8c499744c1d384433796a0e4186f6b7c18554431b8b1616055ceefb7d72,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704747823310960058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6930dd02-4cc0-4a7b-ac48-ae16e451014e,},Annotations:map[string]string{io.kubernetes.container.hash: efb753bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff3ff324f6eb9b2a790682c639a6845b55e1cf34d95b80a2a24238f771c1374,PodSandboxId:b2fa33bfcbdfdc31e56b1593911c141f164f50594930e8cc201df5720f2e43dc,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},Imag
eRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704747823444692040,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gn4q5,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 786efbea-f92d-4fb6-ab90-454c08ba2467,},Annotations:map[string]string{io.kubernetes.container.hash: 501bd69b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a45cf847174a1f42aacb82d3e8eca9e6f1d973df0708c3ea05b00b7d2c4e40,PodSandboxId:5478b5d196fd462f5173f7a27f8e8126a0b51f60fe11a03829d2997d7aa918b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c7
2967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704747809656110696,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c7lz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d20cf0-6829-4cf8-beca-87db8c588c41,},Annotations:map[string]string{io.kubernetes.container.hash: c2813eb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffe3c9a9549862c9f5275c0a791ceeb561158545b2e45c26458006cd5
f92b1ee,PodSandboxId:46b6f272ad6366ac66df527e693e1324e328533cbe2699f9cd556e3824053885,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704747804641394436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nz2vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ba0d5f-0494-4cc2-9bee-f5d278e224d6,},Annotations:map[string]string{io.kubernetes.container.hash: b6d23901,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85bc8b6bca0b97544f8fff65b2cd34491c8ab231f43ceb27295beb87671b58de,PodSandboxId:03f2a4f3661237
8986caf561ca797e201421a316145fb74c091f4ae552e585cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704747781586514062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c6c42df332c80c3a84bfaadd4dc9,},Annotations:map[string]string{io.kubernetes.container.hash: f73f12f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff189d909c657baebf373a534b8f457e4023ac31804da84f4d29364cfd365e98,PodSandboxId:15f039de59688cbe74e0681fe00c80cf48459e809c08a8c2f32cc087a21eb1ba,Meta
data:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704747781463201565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30e8325e80e951b663cd73e01068405,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42632a4ef56a169586ebf17132302634eee52b8d17e4d70ff9631eef4ea1ec68,PodSandboxId:8b27f3765e79d43faf88d95c4b89ae70faa972d7d95be4cad174ad2d0b7439c1,Metadata:&Container
Metadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704747781425029287,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 924350b926d016c581fd6a48e0b43b23,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7305ca7bc73c5375c487af9c2468e16cce1100ca53d3ec5f3de06982359a6710,PodSandboxId:88fe8c0f14d110032b12926aff94eec3e60e27e6c1915b6a67e32859d7e4d0f
a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704747781203912528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-417518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08b71de29e9ab538987364e1b3ad66e8,},Annotations:map[string]string{io.kubernetes.container.hash: 23736642,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f7b8aa71-aad8-421f-85e2-911ebb536244 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	034c152429536       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   cd48546d206db       hello-world-app-5d77478584-p5nfj
	ae4b5f783242c       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   c3e9574b17089       headlamp-7ddfbb94ff-f4qhq
	d2c34743119be       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   baf57858ac7a3       nginx
	8d4007dc85afd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   85012b875c819       gcp-auth-d4c87556c-hxx78
	12ae1ac16817e       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     2                   64b484f1c301b       ingress-nginx-admission-patch-g7r6v
	d78176b4146ac       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   011becaa93336       ingress-nginx-admission-create-zv29v
	6b9b3d7d11622       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   937e5b54caa1e       local-path-provisioner-78b46b4d5c-96p49
	aff3ff324f6eb       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   b2fa33bfcbdfd       yakd-dashboard-9947fc6bf-gn4q5
	321f4894fdbe1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   f126b8c499744       storage-provisioner
	96a45cf847174       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   5478b5d196fd4       coredns-5dd5756b68-c7lz8
	ffe3c9a954986       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   46b6f272ad636       kube-proxy-nz2vh
	85bc8b6bca0b9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   03f2a4f366123       etcd-addons-417518
	ff189d909c657       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   15f039de59688       kube-scheduler-addons-417518
	42632a4ef56a1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   8b27f3765e79d       kube-controller-manager-addons-417518
	7305ca7bc73c5       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   88fe8c0f14d11       kube-apiserver-addons-417518
	
	
	==> coredns [96a45cf847174a1f42aacb82d3e8eca9e6f1d973df0708c3ea05b00b7d2c4e40] <==
	[INFO] 10.244.0.9:50322 - 48754 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097019s
	[INFO] 10.244.0.9:45300 - 15659 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000149696s
	[INFO] 10.244.0.9:45300 - 41513 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000096754s
	[INFO] 10.244.0.9:41485 - 42375 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000182312s
	[INFO] 10.244.0.9:41485 - 61317 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063236s
	[INFO] 10.244.0.9:54484 - 29252 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000198968s
	[INFO] 10.244.0.9:54484 - 33606 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000262935s
	[INFO] 10.244.0.9:43643 - 17036 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000091794s
	[INFO] 10.244.0.9:43643 - 13696 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000049792s
	[INFO] 10.244.0.9:42935 - 62019 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049506s
	[INFO] 10.244.0.9:42935 - 8000 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000031247s
	[INFO] 10.244.0.9:54077 - 25828 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148793s
	[INFO] 10.244.0.9:54077 - 23015 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000355832s
	[INFO] 10.244.0.9:56219 - 12558 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000038862s
	[INFO] 10.244.0.9:56219 - 46385 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050335s
	[INFO] 10.244.0.21:52988 - 51254 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000727203s
	[INFO] 10.244.0.21:57772 - 37157 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284822s
	[INFO] 10.244.0.21:59990 - 18577 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000510555s
	[INFO] 10.244.0.21:42436 - 20211 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000076253s
	[INFO] 10.244.0.21:58076 - 57923 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008576s
	[INFO] 10.244.0.21:54794 - 50927 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116463s
	[INFO] 10.244.0.21:34196 - 3795 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000897243s
	[INFO] 10.244.0.21:40624 - 42474 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000553616s
	[INFO] 10.244.0.24:33120 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000808091s
	[INFO] 10.244.0.24:58007 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00024744s
	
	
	==> describe nodes <==
	Name:               addons-417518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-417518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=addons-417518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_03_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-417518
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:03:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-417518
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:07:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:06:13 +0000   Mon, 08 Jan 2024 21:03:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:06:13 +0000   Mon, 08 Jan 2024 21:03:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:06:13 +0000   Mon, 08 Jan 2024 21:03:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:06:13 +0000   Mon, 08 Jan 2024 21:03:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    addons-417518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1981f9bfd0c42c888b784882656f02b
	  System UUID:                d1981f9b-fd0c-42c8-88b7-84882656f02b
	  Boot ID:                    06ec7cbd-c677-4edb-8963-a6972ab9306b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-p5nfj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-d4c87556c-hxx78                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  headlamp                    headlamp-7ddfbb94ff-f4qhq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 coredns-5dd5756b68-c7lz8                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m23s
	  kube-system                 etcd-addons-417518                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m36s
	  kube-system                 kube-apiserver-addons-417518               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-addons-417518      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-nz2vh                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-addons-417518               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  local-path-storage          local-path-provisioner-78b46b4d5c-96p49    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-gn4q5             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m45s (x8 over 4m45s)  kubelet          Node addons-417518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s (x8 over 4m45s)  kubelet          Node addons-417518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s (x7 over 4m45s)  kubelet          Node addons-417518 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m37s                  kubelet          Node addons-417518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s                  kubelet          Node addons-417518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s                  kubelet          Node addons-417518 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m36s                  kubelet          Node addons-417518 status is now: NodeReady
	  Normal  RegisteredNode           4m24s                  node-controller  Node addons-417518 event: Registered Node addons-417518 in Controller
	
	
	==> dmesg <==
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.077867] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.600566] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.111506] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.147262] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.105746] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.214042] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +9.860811] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[Jan 8 21:03] systemd-fstab-generator[1246]: Ignoring "noauto" for root device
	[ +21.536870] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.329779] kauditd_printk_skb: 54 callbacks suppressed
	[  +9.930111] kauditd_printk_skb: 26 callbacks suppressed
	[Jan 8 21:04] kauditd_printk_skb: 22 callbacks suppressed
	[ +35.035884] kauditd_printk_skb: 27 callbacks suppressed
	[Jan 8 21:05] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.645286] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.863360] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.896503] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.540343] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.372719] kauditd_printk_skb: 4 callbacks suppressed
	[ +14.495825] kauditd_printk_skb: 6 callbacks suppressed
	[Jan 8 21:06] kauditd_printk_skb: 12 callbacks suppressed
	[Jan 8 21:07] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [85bc8b6bca0b97544f8fff65b2cd34491c8ab231f43ceb27295beb87671b58de] <==
	{"level":"info","ts":"2024-01-08T21:04:42.456186Z","caller":"traceutil/trace.go:171","msg":"trace[2103883168] linearizableReadLoop","detail":"{readStateIndex:1173; appliedIndex:1172; }","duration":"187.629424ms","start":"2024-01-08T21:04:42.268545Z","end":"2024-01-08T21:04:42.456174Z","steps":["trace[2103883168] 'read index received'  (duration: 179.986409ms)","trace[2103883168] 'applied index is now lower than readState.Index'  (duration: 7.642196ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:04:42.456288Z","caller":"traceutil/trace.go:171","msg":"trace[219053009] transaction","detail":"{read_only:false; response_revision:1134; number_of_response:1; }","duration":"283.300758ms","start":"2024-01-08T21:04:42.17298Z","end":"2024-01-08T21:04:42.456281Z","steps":["trace[219053009] 'process raft request'  (duration: 282.913993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:04:42.457215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.728894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82382"}
	{"level":"info","ts":"2024-01-08T21:04:42.457307Z","caller":"traceutil/trace.go:171","msg":"trace[200717891] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1134; }","duration":"188.833885ms","start":"2024-01-08T21:04:42.268464Z","end":"2024-01-08T21:04:42.457298Z","steps":["trace[200717891] 'agreement among raft nodes before linearized reading'  (duration: 187.894608ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:04:46.010707Z","caller":"traceutil/trace.go:171","msg":"trace[251388292] linearizableReadLoop","detail":"{readStateIndex:1185; appliedIndex:1184; }","duration":"126.00338ms","start":"2024-01-08T21:04:45.884689Z","end":"2024-01-08T21:04:46.010693Z","steps":["trace[251388292] 'read index received'  (duration: 125.853661ms)","trace[251388292] 'applied index is now lower than readState.Index'  (duration: 148.98µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:04:46.011099Z","caller":"traceutil/trace.go:171","msg":"trace[39320171] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"242.353614ms","start":"2024-01-08T21:04:45.768678Z","end":"2024-01-08T21:04:46.011032Z","steps":["trace[39320171] 'process raft request'  (duration: 241.900816ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:04:46.011349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.665734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10951"}
	{"level":"info","ts":"2024-01-08T21:04:46.011415Z","caller":"traceutil/trace.go:171","msg":"trace[1586389718] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1145; }","duration":"126.741961ms","start":"2024-01-08T21:04:45.884664Z","end":"2024-01-08T21:04:46.011406Z","steps":["trace[1586389718] 'agreement among raft nodes before linearized reading'  (duration: 126.63575ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:05:10.054754Z","caller":"traceutil/trace.go:171","msg":"trace[1850153213] transaction","detail":"{read_only:false; response_revision:1340; number_of_response:1; }","duration":"184.383133ms","start":"2024-01-08T21:05:09.870318Z","end":"2024-01-08T21:05:10.054701Z","steps":["trace[1850153213] 'process raft request'  (duration: 184.002783ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:05:32.905323Z","caller":"traceutil/trace.go:171","msg":"trace[1676176348] transaction","detail":"{read_only:false; response_revision:1499; number_of_response:1; }","duration":"447.716319ms","start":"2024-01-08T21:05:32.457584Z","end":"2024-01-08T21:05:32.9053Z","steps":["trace[1676176348] 'process raft request'  (duration: 447.38213ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:05:32.905584Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:05:32.45757Z","time spent":"447.873848ms","remote":"127.0.0.1:55640","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1498 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-08T21:05:32.911967Z","caller":"traceutil/trace.go:171","msg":"trace[2018765860] linearizableReadLoop","detail":"{readStateIndex:1558; appliedIndex:1556; }","duration":"144.042327ms","start":"2024-01-08T21:05:32.767915Z","end":"2024-01-08T21:05:32.911957Z","steps":["trace[2018765860] 'read index received'  (duration: 137.419858ms)","trace[2018765860] 'applied index is now lower than readState.Index'  (duration: 6.621747ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:05:32.912146Z","caller":"traceutil/trace.go:171","msg":"trace[1697175375] transaction","detail":"{read_only:false; response_revision:1500; number_of_response:1; }","duration":"439.61888ms","start":"2024-01-08T21:05:32.472512Z","end":"2024-01-08T21:05:32.912131Z","steps":["trace[1697175375] 'process raft request'  (duration: 439.31117ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:05:32.912257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.57558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2024-01-08T21:05:32.912373Z","caller":"traceutil/trace.go:171","msg":"trace[839860823] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1500; }","duration":"130.687355ms","start":"2024-01-08T21:05:32.781677Z","end":"2024-01-08T21:05:32.912365Z","steps":["trace[839860823] 'agreement among raft nodes before linearized reading'  (duration: 130.559456ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:05:32.912482Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:05:32.472498Z","time spent":"439.788339ms","remote":"127.0.0.1:55662","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-417518\" mod_revision:1448 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-417518\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-417518\" > >"}
	{"level":"warn","ts":"2024-01-08T21:05:32.912587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.61872ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:05:32.912629Z","caller":"traceutil/trace.go:171","msg":"trace[1102449992] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1500; }","duration":"101.656651ms","start":"2024-01-08T21:05:32.81096Z","end":"2024-01-08T21:05:32.912617Z","steps":["trace[1102449992] 'agreement among raft nodes before linearized reading'  (duration: 101.605426ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:05:32.912218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.302251ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-01-08T21:05:32.912903Z","caller":"traceutil/trace.go:171","msg":"trace[103148443] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1500; }","duration":"145.004185ms","start":"2024-01-08T21:05:32.767892Z","end":"2024-01-08T21:05:32.912896Z","steps":["trace[103148443] 'agreement among raft nodes before linearized reading'  (duration: 144.275325ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:05:54.922415Z","caller":"traceutil/trace.go:171","msg":"trace[402865837] linearizableReadLoop","detail":"{readStateIndex:1642; appliedIndex:1641; }","duration":"111.933861ms","start":"2024-01-08T21:05:54.810462Z","end":"2024-01-08T21:05:54.922396Z","steps":["trace[402865837] 'read index received'  (duration: 111.775287ms)","trace[402865837] 'applied index is now lower than readState.Index'  (duration: 157.707µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:05:54.923875Z","caller":"traceutil/trace.go:171","msg":"trace[1135056417] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"163.453506ms","start":"2024-01-08T21:05:54.760298Z","end":"2024-01-08T21:05:54.923751Z","steps":["trace[1135056417] 'process raft request'  (duration: 162.00311ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:05:54.924867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.339067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:05:54.92492Z","caller":"traceutil/trace.go:171","msg":"trace[236654132] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1578; }","duration":"114.476364ms","start":"2024-01-08T21:05:54.81043Z","end":"2024-01-08T21:05:54.924906Z","steps":["trace[236654132] 'agreement among raft nodes before linearized reading'  (duration: 112.084895ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:06:00.079415Z","caller":"traceutil/trace.go:171","msg":"trace[832789773] transaction","detail":"{read_only:false; response_revision:1589; number_of_response:1; }","duration":"142.385616ms","start":"2024-01-08T21:05:59.937015Z","end":"2024-01-08T21:06:00.079401Z","steps":["trace[832789773] 'process raft request'  (duration: 142.25685ms)"],"step_count":1}
	
	
	==> gcp-auth [8d4007dc85afd5c87ffb8646c450636d1c515336da41343e604d0f4f93723a0b] <==
	2024/01/08 21:04:50 GCP Auth Webhook started!
	2024/01/08 21:04:57 Ready to marshal response ...
	2024/01/08 21:04:57 Ready to write response ...
	2024/01/08 21:04:57 Ready to marshal response ...
	2024/01/08 21:04:57 Ready to write response ...
	2024/01/08 21:05:08 Ready to marshal response ...
	2024/01/08 21:05:08 Ready to write response ...
	2024/01/08 21:05:08 Ready to marshal response ...
	2024/01/08 21:05:08 Ready to write response ...
	2024/01/08 21:05:10 Ready to marshal response ...
	2024/01/08 21:05:10 Ready to write response ...
	2024/01/08 21:05:26 Ready to marshal response ...
	2024/01/08 21:05:26 Ready to write response ...
	2024/01/08 21:05:26 Ready to marshal response ...
	2024/01/08 21:05:26 Ready to write response ...
	2024/01/08 21:05:26 Ready to marshal response ...
	2024/01/08 21:05:26 Ready to write response ...
	2024/01/08 21:05:27 Ready to marshal response ...
	2024/01/08 21:05:27 Ready to write response ...
	2024/01/08 21:05:48 Ready to marshal response ...
	2024/01/08 21:05:48 Ready to write response ...
	2024/01/08 21:06:15 Ready to marshal response ...
	2024/01/08 21:06:15 Ready to write response ...
	2024/01/08 21:07:33 Ready to marshal response ...
	2024/01/08 21:07:33 Ready to write response ...
	
	
	==> kernel <==
	 21:07:45 up 5 min,  0 users,  load average: 2.20, 2.20, 1.06
	Linux addons-417518 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [7305ca7bc73c5375c487af9c2468e16cce1100ca53d3ec5f3de06982359a6710] <==
	I0108 21:05:10.259256       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.214.178"}
	I0108 21:05:15.210191       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0108 21:05:26.656736       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.249.147"}
	E0108 21:05:35.396320       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.218:8443->10.244.0.28:33236: read: connection reset by peer
	I0108 21:06:02.291747       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0108 21:06:33.263026       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:06:33.263945       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:06:33.283339       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:06:33.283437       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:06:33.307915       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:06:33.308015       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:06:33.318243       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:06:33.318313       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:06:33.323671       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:06:33.323734       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:06:33.331507       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:06:33.331681       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:06:33.340435       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:06:33.340498       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 21:06:33.353217       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 21:06:33.356179       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0108 21:06:34.324710       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0108 21:06:34.341529       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0108 21:06:34.368386       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0108 21:07:33.938617       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.50.89"}
	
	
	==> kube-controller-manager [42632a4ef56a169586ebf17132302634eee52b8d17e4d70ff9631eef4ea1ec68] <==
	W0108 21:06:52.534235       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:06:52.534313       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 21:07:02.492940       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:07:02.493135       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 21:07:09.502381       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:07:09.502448       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 21:07:10.007376       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:07:10.007524       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 21:07:10.801787       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:07:10.801903       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 21:07:33.625229       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0108 21:07:33.665849       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-p5nfj"
	I0108 21:07:33.676701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.112568ms"
	I0108 21:07:33.697646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.857694ms"
	I0108 21:07:33.697864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="96.835µs"
	I0108 21:07:33.755503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="68.296µs"
	W0108 21:07:36.589388       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:07:36.589453       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 21:07:36.711908       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0108 21:07:36.725446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="12.532µs"
	I0108 21:07:36.733597       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0108 21:07:36.887439       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 21:07:36.887494       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 21:07:36.894632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.528918ms"
	I0108 21:07:36.894964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="58.99µs"
	
	
	==> kube-proxy [ffe3c9a9549862c9f5275c0a791ceeb561158545b2e45c26458006cd5f92b1ee] <==
	I0108 21:03:28.992978       1 server_others.go:69] "Using iptables proxy"
	I0108 21:03:30.202981       1 node.go:141] Successfully retrieved node IP: 192.168.39.218
	I0108 21:03:35.948283       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:03:35.948360       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:03:36.056534       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:03:36.056720       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:03:36.097771       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:03:36.097973       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:03:36.111198       1 config.go:188] "Starting service config controller"
	I0108 21:03:36.111325       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:03:36.111449       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:03:36.111521       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:03:36.111589       1 config.go:315] "Starting node config controller"
	I0108 21:03:36.352171       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:03:36.352330       1 shared_informer.go:318] Caches are synced for node config
	I0108 21:03:36.433633       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:03:36.433790       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [ff189d909c657baebf373a534b8f457e4023ac31804da84f4d29364cfd365e98] <==
	W0108 21:03:05.516636       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:03:05.516648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:03:05.516780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:03:05.516791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:03:05.518466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:03:05.518509       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:03:05.518661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:03:05.518725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:03:05.518675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:03:05.518527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:03:05.518858       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:03:05.518514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:03:06.496601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:03:06.496701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:03:06.526682       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:03:06.526772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:03:06.617978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:03:06.618146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:03:06.748560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:03:06.748682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 21:03:06.756772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:03:06.756891       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:03:06.878944       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:03:06.879173       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 21:03:09.607285       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:02:37 UTC, ends at Mon 2024-01-08 21:07:45 UTC. --
	Jan 08 21:07:33 addons-417518 kubelet[1253]: I0108 21:07:33.685785    1253 memory_manager.go:346] "RemoveStaleState removing state" podUID="2736db7c-0b61-45e5-9010-c21d6b10319a" containerName="hostpath"
	Jan 08 21:07:33 addons-417518 kubelet[1253]: I0108 21:07:33.685790    1253 memory_manager.go:346] "RemoveStaleState removing state" podUID="2736db7c-0b61-45e5-9010-c21d6b10319a" containerName="node-driver-registrar"
	Jan 08 21:07:33 addons-417518 kubelet[1253]: I0108 21:07:33.867303    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/02b4cc66-9a40-43b0-8094-30ef0299344f-gcp-creds\") pod \"hello-world-app-5d77478584-p5nfj\" (UID: \"02b4cc66-9a40-43b0-8094-30ef0299344f\") " pod="default/hello-world-app-5d77478584-p5nfj"
	Jan 08 21:07:33 addons-417518 kubelet[1253]: I0108 21:07:33.867383    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnkhr\" (UniqueName: \"kubernetes.io/projected/02b4cc66-9a40-43b0-8094-30ef0299344f-kube-api-access-tnkhr\") pod \"hello-world-app-5d77478584-p5nfj\" (UID: \"02b4cc66-9a40-43b0-8094-30ef0299344f\") " pod="default/hello-world-app-5d77478584-p5nfj"
	Jan 08 21:07:35 addons-417518 kubelet[1253]: I0108 21:07:35.277984    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfr5l\" (UniqueName: \"kubernetes.io/projected/b0546a5b-757e-4851-9049-677f5d725202-kube-api-access-bfr5l\") pod \"b0546a5b-757e-4851-9049-677f5d725202\" (UID: \"b0546a5b-757e-4851-9049-677f5d725202\") "
	Jan 08 21:07:35 addons-417518 kubelet[1253]: I0108 21:07:35.280630    1253 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0546a5b-757e-4851-9049-677f5d725202-kube-api-access-bfr5l" (OuterVolumeSpecName: "kube-api-access-bfr5l") pod "b0546a5b-757e-4851-9049-677f5d725202" (UID: "b0546a5b-757e-4851-9049-677f5d725202"). InnerVolumeSpecName "kube-api-access-bfr5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 21:07:35 addons-417518 kubelet[1253]: I0108 21:07:35.379384    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bfr5l\" (UniqueName: \"kubernetes.io/projected/b0546a5b-757e-4851-9049-677f5d725202-kube-api-access-bfr5l\") on node \"addons-417518\" DevicePath \"\""
	Jan 08 21:07:35 addons-417518 kubelet[1253]: I0108 21:07:35.832661    1253 scope.go:117] "RemoveContainer" containerID="54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433"
	Jan 08 21:07:35 addons-417518 kubelet[1253]: I0108 21:07:35.935290    1253 scope.go:117] "RemoveContainer" containerID="54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433"
	Jan 08 21:07:35 addons-417518 kubelet[1253]: E0108 21:07:35.940726    1253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433\": container with ID starting with 54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433 not found: ID does not exist" containerID="54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433"
	Jan 08 21:07:35 addons-417518 kubelet[1253]: I0108 21:07:35.940838    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433"} err="failed to get container status \"54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433\": rpc error: code = NotFound desc = could not find container \"54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433\": container with ID starting with 54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433 not found: ID does not exist"
	Jan 08 21:07:36 addons-417518 kubelet[1253]: I0108 21:07:36.718359    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b0546a5b-757e-4851-9049-677f5d725202" path="/var/lib/kubelet/pods/b0546a5b-757e-4851-9049-677f5d725202/volumes"
	Jan 08 21:07:38 addons-417518 kubelet[1253]: I0108 21:07:38.718529    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2817f066-66b4-4b1e-8987-3d3c38fe51f0" path="/var/lib/kubelet/pods/2817f066-66b4-4b1e-8987-3d3c38fe51f0/volumes"
	Jan 08 21:07:38 addons-417518 kubelet[1253]: I0108 21:07:38.719026    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bbcc1e06-b59c-468e-a0ea-32ffcb385093" path="/var/lib/kubelet/pods/bbcc1e06-b59c-468e-a0ea-32ffcb385093/volumes"
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.012461    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7facfa89-b419-46dd-a91e-91872b0a6b71-webhook-cert\") pod \"7facfa89-b419-46dd-a91e-91872b0a6b71\" (UID: \"7facfa89-b419-46dd-a91e-91872b0a6b71\") "
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.012536    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48kp6\" (UniqueName: \"kubernetes.io/projected/7facfa89-b419-46dd-a91e-91872b0a6b71-kube-api-access-48kp6\") pod \"7facfa89-b419-46dd-a91e-91872b0a6b71\" (UID: \"7facfa89-b419-46dd-a91e-91872b0a6b71\") "
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.017832    1253 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7facfa89-b419-46dd-a91e-91872b0a6b71-kube-api-access-48kp6" (OuterVolumeSpecName: "kube-api-access-48kp6") pod "7facfa89-b419-46dd-a91e-91872b0a6b71" (UID: "7facfa89-b419-46dd-a91e-91872b0a6b71"). InnerVolumeSpecName "kube-api-access-48kp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.018206    1253 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7facfa89-b419-46dd-a91e-91872b0a6b71-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7facfa89-b419-46dd-a91e-91872b0a6b71" (UID: "7facfa89-b419-46dd-a91e-91872b0a6b71"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.113354    1253 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7facfa89-b419-46dd-a91e-91872b0a6b71-webhook-cert\") on node \"addons-417518\" DevicePath \"\""
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.113467    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-48kp6\" (UniqueName: \"kubernetes.io/projected/7facfa89-b419-46dd-a91e-91872b0a6b71-kube-api-access-48kp6\") on node \"addons-417518\" DevicePath \"\""
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.718687    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7facfa89-b419-46dd-a91e-91872b0a6b71" path="/var/lib/kubelet/pods/7facfa89-b419-46dd-a91e-91872b0a6b71/volumes"
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.883518    1253 scope.go:117] "RemoveContainer" containerID="849370ac40dfc1e327ffea547b3cfd28208274f70fca042713d2b84afe9be62d"
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.900610    1253 scope.go:117] "RemoveContainer" containerID="849370ac40dfc1e327ffea547b3cfd28208274f70fca042713d2b84afe9be62d"
	Jan 08 21:07:40 addons-417518 kubelet[1253]: E0108 21:07:40.901348    1253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849370ac40dfc1e327ffea547b3cfd28208274f70fca042713d2b84afe9be62d\": container with ID starting with 849370ac40dfc1e327ffea547b3cfd28208274f70fca042713d2b84afe9be62d not found: ID does not exist" containerID="849370ac40dfc1e327ffea547b3cfd28208274f70fca042713d2b84afe9be62d"
	Jan 08 21:07:40 addons-417518 kubelet[1253]: I0108 21:07:40.901434    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849370ac40dfc1e327ffea547b3cfd28208274f70fca042713d2b84afe9be62d"} err="failed to get container status \"849370ac40dfc1e327ffea547b3cfd28208274f70fca042713d2b84afe9be62d\": rpc error: code = NotFound desc = could not find container \"849370ac40dfc1e327ffea547b3cfd28208274f70fca042713d2b84afe9be62d\": container with ID starting with 849370ac40dfc1e327ffea547b3cfd28208274f70fca042713d2b84afe9be62d not found: ID does not exist"
	
	
	==> storage-provisioner [321f4894fdbe1ea088a3cccca01475c26718bd59e15ba23fdd2e6e1a01a0ca99] <==
	I0108 21:03:45.471776       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:03:45.593410       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:03:45.593470       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:03:45.754981       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:03:45.804134       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1c02a6d-c6f7-4946-bcf1-ca4c92a63044", APIVersion:"v1", ResourceVersion:"879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-417518_373ea2df-c18f-4b7d-b793-bda5f2966a1e became leader
	I0108 21:03:45.811556       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-417518_373ea2df-c18f-4b7d-b793-bda5f2966a1e!
	I0108 21:03:46.112277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-417518_373ea2df-c18f-4b7d-b793-bda5f2966a1e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-417518 -n addons-417518
helpers_test.go:261: (dbg) Run:  kubectl --context addons-417518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.46s)
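The kube-scheduler "forbidden" list/watch warnings in the post-mortem above are typical startup noise that clears once the RBAC caches sync (see the "Caches are synced" line that follows them), and the kubelet "container ... not found" errors refer to containers the kubelet had itself just removed, so neither is the cause of this failure. If similar warnings ever need checking by hand, a rough sketch (assuming kubectl and the built binary are used against the addons-417518 profile, as elsewhere in this run):

    # Confirm the scheduler's RBAC once the apiserver is reachable (impersonation sketch)
    kubectl --context addons-417518 auth can-i list pods --as=system:kube-scheduler
    kubectl --context addons-417518 auth can-i list namespaces --as=system:kube-scheduler
    # Cross-check a "not found" container ID against what the runtime still knows about
    out/minikube-linux-amd64 -p addons-417518 ssh -- sudo crictl ps -a --quiet | head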

                                                
                                    
x
+
TestAddons/parallel/LocalPath (12.68s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-417518 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-417518 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f523bd7b-bda3-46ea-9582-dce3e21e56cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f523bd7b-bda3-46ea-9582-dce3e21e56cf] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f523bd7b-bda3-46ea-9582-dce3e21e56cf] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00508415s
addons_test.go:891: (dbg) Run:  kubectl --context addons-417518 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 ssh "cat /opt/local-path-provisioner/pvc-2fc9d749-712e-4d0c-8caa-1fbe8b09f623_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-417518 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-417518 delete pvc test-pvc
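The steps above are a complete round trip through the local-path provisioner: claim a PVC, let a pod bind it and write file1, read the file back from the node path, then clean up. The manifests under testdata/storage-provisioner-rancher are not reproduced in this log; an illustrative equivalent (object names matched to this run, the "local-path" StorageClass name and the file contents are assumptions) would be:

    kubectl --context addons-417518 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path   # assumed: the class installed by the addon
      resources:
        requests:
          storage: 64Mi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-local-path
      labels:
        run: test-local-path
    spec:
      restartPolicy: Never
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "echo local-path > /data/file1"]
        volumeMounts:
        - name: data
          mountPath: /data
    EOF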
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-417518 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (525.598268ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:05:09.075381  343571 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:05:09.075518  343571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:05:09.075531  343571 out.go:309] Setting ErrFile to fd 2...
	I0108 21:05:09.075544  343571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:05:09.075745  343571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 21:05:09.076024  343571 mustload.go:65] Loading cluster: addons-417518
	I0108 21:05:09.076398  343571 config.go:182] Loaded profile config "addons-417518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:05:09.076425  343571 addons.go:600] checking whether the cluster is paused
	I0108 21:05:09.076515  343571 config.go:182] Loaded profile config "addons-417518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:05:09.076540  343571 host.go:66] Checking if "addons-417518" exists ...
	I0108 21:05:09.076910  343571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:05:09.076963  343571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:05:09.096334  343571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I0108 21:05:09.096884  343571 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:05:09.097619  343571 main.go:141] libmachine: Using API Version  1
	I0108 21:05:09.097654  343571 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:05:09.098012  343571 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:05:09.098276  343571 main.go:141] libmachine: (addons-417518) Calling .GetState
	I0108 21:05:09.100044  343571 main.go:141] libmachine: (addons-417518) Calling .DriverName
	I0108 21:05:09.100288  343571 ssh_runner.go:195] Run: systemctl --version
	I0108 21:05:09.100319  343571 main.go:141] libmachine: (addons-417518) Calling .GetSSHHostname
	I0108 21:05:09.103346  343571 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:05:09.103798  343571 main.go:141] libmachine: (addons-417518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:c6:e8", ip: ""} in network mk-addons-417518: {Iface:virbr1 ExpiryTime:2024-01-08 22:02:41 +0000 UTC Type:0 Mac:52:54:00:96:c6:e8 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:addons-417518 Clientid:01:52:54:00:96:c6:e8}
	I0108 21:05:09.103828  343571 main.go:141] libmachine: (addons-417518) DBG | domain addons-417518 has defined IP address 192.168.39.218 and MAC address 52:54:00:96:c6:e8 in network mk-addons-417518
	I0108 21:05:09.104710  343571 main.go:141] libmachine: (addons-417518) Calling .GetSSHPort
	I0108 21:05:09.104916  343571 main.go:141] libmachine: (addons-417518) Calling .GetSSHKeyPath
	I0108 21:05:09.105071  343571 main.go:141] libmachine: (addons-417518) Calling .GetSSHUsername
	I0108 21:05:09.105172  343571 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/addons-417518/id_rsa Username:docker}
	I0108 21:05:09.276310  343571 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:05:09.276409  343571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:05:09.351256  343571 cri.go:89] found id: "dc340f4d005beeb90703da72edbc8850403f622d5a134f6dc2bb9fdfdedadc73"
	I0108 21:05:09.351297  343571 cri.go:89] found id: "ff6803f901abc1ca4cbdb802cbdbf3d9a5bcf849e3f335f82b69a2c394f0d3f5"
	I0108 21:05:09.351304  343571 cri.go:89] found id: "ad295eea80f31edccc8b5b550c937bdf69fd911cb8766db6eec436e932173728"
	I0108 21:05:09.351332  343571 cri.go:89] found id: "43dd529f18b679435f196feba38b3d730fee8a2202ec4c3a2648047edf694380"
	I0108 21:05:09.351338  343571 cri.go:89] found id: "85cbfe955ec3feb9f5f74cf4e38e9caab4d76e335f459a68015b6d18410d6560"
	I0108 21:05:09.351344  343571 cri.go:89] found id: "374f061339707c83fdbae5cc10708d93edd53ee317cbd892ad25cfc2d855f658"
	I0108 21:05:09.351352  343571 cri.go:89] found id: "98355ccde679880454ed83879a8ec85e0fd585cbf878a02c8cdaa06cafde2a1e"
	I0108 21:05:09.351368  343571 cri.go:89] found id: "475404041e407bc669ac9f25f5a310a1ff6bbfa533d9a2ac04ac31bbb7722e88"
	I0108 21:05:09.351373  343571 cri.go:89] found id: "0eafed4ef3630be9e2ee9f269e4d07b427d30c90f68acf3b1740fa5fe41d1a5a"
	I0108 21:05:09.351385  343571 cri.go:89] found id: "35e56c1ea0d7df698f0d2996f9ec1855e99b9cc49d7f00f27912b1ce3c539a4f"
	I0108 21:05:09.351393  343571 cri.go:89] found id: "e3913122b46418649dd05d6660dc829400790923ceedb0aff88342a30dab6302"
	I0108 21:05:09.351398  343571 cri.go:89] found id: "a92add2decc5560a39c9d37cf1bdb69e6b181dc70c37736d14ecfe776bead629"
	I0108 21:05:09.351406  343571 cri.go:89] found id: "1e0afffaeeb010e2a437b2d3147297a2f7e8265073c85410a24c669b563856b6"
	I0108 21:05:09.351415  343571 cri.go:89] found id: "db44b172e0b79e0fde2a99a8c5b1065b201ccc467b13b45063f8ffce1a5d1c27"
	I0108 21:05:09.351424  343571 cri.go:89] found id: "54f95c593bcc031d66cae5dd919386058468bee9755a359c0051048bc25f0433"
	I0108 21:05:09.351433  343571 cri.go:89] found id: "321f4894fdbe1ea088a3cccca01475c26718bd59e15ba23fdd2e6e1a01a0ca99"
	I0108 21:05:09.351442  343571 cri.go:89] found id: "96a45cf847174a1f42aacb82d3e8eca9e6f1d973df0708c3ea05b00b7d2c4e40"
	I0108 21:05:09.351453  343571 cri.go:89] found id: "ffe3c9a9549862c9f5275c0a791ceeb561158545b2e45c26458006cd5f92b1ee"
	I0108 21:05:09.351462  343571 cri.go:89] found id: "85bc8b6bca0b97544f8fff65b2cd34491c8ab231f43ceb27295beb87671b58de"
	I0108 21:05:09.351476  343571 cri.go:89] found id: "ff189d909c657baebf373a534b8f457e4023ac31804da84f4d29364cfd365e98"
	I0108 21:05:09.351484  343571 cri.go:89] found id: "42632a4ef56a169586ebf17132302634eee52b8d17e4d70ff9631eef4ea1ec68"
	I0108 21:05:09.351493  343571 cri.go:89] found id: "7305ca7bc73c5375c487af9c2468e16cce1100ca53d3ec5f3de06982359a6710"
	I0108 21:05:09.351502  343571 cri.go:89] found id: ""
	I0108 21:05:09.351566  343571 ssh_runner.go:195] Run: sudo runc list -f json
	I0108 21:05:09.517226  343571 main.go:141] libmachine: Making call to close driver server
	I0108 21:05:09.517258  343571 main.go:141] libmachine: (addons-417518) Calling .Close
	I0108 21:05:09.517549  343571 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:05:09.517564  343571 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:05:09.520088  343571 out.go:177] 
	W0108 21:05:09.521854  343571 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-08T21:05:09Z" level=error msg="stat /run/runc/4ed2cab1217912d270a02307654479b6aa2fc3263a87160581cd8cf5a567fefd: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-08T21:05:09Z" level=error msg="stat /run/runc/4ed2cab1217912d270a02307654479b6aa2fc3263a87160581cd8cf5a567fefd: no such file or directory"
	
	W0108 21:05:09.521878  343571 out.go:239] * 
	* 
	W0108 21:05:09.524438  343571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:05:09.525823  343571 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:922: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-417518 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (12.68s)
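The disable did not fail on the addon itself; it failed in minikube's "is the cluster paused?" pre-check. The trace shows the check first listing kube-system containers with crictl and then asking runc for their state; a container had likely just been removed (the test deletes the test-local-path pod immediately beforehand), so runc hit a stale /run/runc entry and the whole command surfaced as MK_ADDON_DISABLE_PAUSED. The two steps can be replayed by hand, roughly as follows (the label filter is taken verbatim from the trace):

    # Step 1: the containers minikube enumerates for the paused check
    out/minikube-linux-amd64 -p addons-417518 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Step 2: the runc state query that raced with container teardown
    out/minikube-linux-amd64 -p addons-417518 ssh -- sudo runc list -f json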

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.5s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-417518
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-417518: exit status 82 (2m1.552072359s)

                                                
                                                
-- stdout --
	* Stopping node "addons-417518"  ...
	* Stopping node "addons-417518"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-417518" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-417518
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-417518: exit status 11 (21.661671572s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.218:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-417518" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-417518
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-417518: exit status 11 (6.143573096s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.218:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-417518" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-417518
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-417518: exit status 11 (6.143755438s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.218:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-417518" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.50s)
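Everything after the initial timeout is a knock-on effect: the guest never finished stopping inside the two-minute budget (GUEST_STOP_TIMEOUT), so the follow-up enable/disable calls could not SSH in and failed with "no route to host". Outside CI, the usual next step is to see what libvirt thinks the guest is doing and retry the stop; a sketch, assuming the kvm2 domain carries the profile name:

    # What state is the domain actually in?
    sudo virsh list --all | grep addons-417518
    out/minikube-linux-amd64 status -p addons-417518
    # Retry the graceful stop; force power-off only as a last resort
    out/minikube-linux-amd64 stop -p addons-417518
    sudo virsh destroy addons-417518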

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (177.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-798925 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-798925 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.907330269s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-798925 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-798925 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [694b3107-b164-4318-9cbe-351b3c7e9917] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [694b3107-b164-4318-9cbe-351b3c7e9917] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.003707383s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-798925 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0108 21:19:44.964346  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:44.969679  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:44.979984  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:45.000302  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:45.040620  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:45.121054  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:45.281499  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:45.602135  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:46.243139  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:47.523753  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:50.084759  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:55.205587  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:19:56.855405  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:20:05.446205  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-798925 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.559002806s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
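The exit status 28 reported by ssh above is curl's timeout code: the request to the controller on 127.0.0.1:80 hung rather than being refused. The probe can be rerun by hand with an explicit deadline and the backing objects inspected, for example (same profile and Host header as this run):

    out/minikube-linux-amd64 -p ingress-addon-legacy-798925 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Was the Ingress admitted, and does the Service have endpoints behind it?
    kubectl --context ingress-addon-legacy-798925 get ingress,svc,endpoints -A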
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-798925 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-798925 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.193
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-798925 addons disable ingress-dns --alsologtostderr -v=1
E0108 21:20:24.784145  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:20:25.926583  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-798925 addons disable ingress-dns --alsologtostderr -v=1: (11.514788286s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-798925 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-798925 addons disable ingress --alsologtostderr -v=1: (7.565940179s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-798925 -n ingress-addon-legacy-798925
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-798925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-798925 logs -n 25: (1.195831583s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-848083 image load                                              | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-848083 ssh findmnt                                             | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | -T /mount1                                                                |                             |         |         |                     |                     |
	| ssh            | functional-848083 ssh findmnt                                             | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | -T /mount2                                                                |                             |         |         |                     |                     |
	| ssh            | functional-848083 ssh findmnt                                             | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | -T /mount3                                                                |                             |         |         |                     |                     |
	| mount          | -p functional-848083                                                      | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC |                     |
	|                | --kill=true                                                               |                             |         |         |                     |                     |
	| image          | functional-848083 image ls                                                | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| update-context | functional-848083                                                         | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC |                     |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-848083                                                         | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-848083 image save --daemon                                     | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-848083                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| update-context | functional-848083                                                         | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-848083                                                         | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-848083                                                         | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-848083                                                         | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-848083                                                         | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-848083 ssh pgrep                                               | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-848083 image build -t                                          | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|                | localhost/my-image:functional-848083                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-848083 image ls                                                | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| delete         | -p functional-848083                                                      | functional-848083           | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| start          | -p ingress-addon-legacy-798925                                            | ingress-addon-legacy-798925 | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:17 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-798925                                               | ingress-addon-legacy-798925 | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:17 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-798925                                               | ingress-addon-legacy-798925 | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:17 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-798925                                               | ingress-addon-legacy-798925 | jenkins | v1.32.0 | 08 Jan 24 21:18 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-798925 ip                                            | ingress-addon-legacy-798925 | jenkins | v1.32.0 | 08 Jan 24 21:20 UTC | 08 Jan 24 21:20 UTC |
	| addons         | ingress-addon-legacy-798925                                               | ingress-addon-legacy-798925 | jenkins | v1.32.0 | 08 Jan 24 21:20 UTC | 08 Jan 24 21:20 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-798925                                               | ingress-addon-legacy-798925 | jenkins | v1.32.0 | 08 Jan 24 21:20 UTC | 08 Jan 24 21:20 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:15:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:15:40.084622  350745 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:15:40.084921  350745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:15:40.084931  350745 out.go:309] Setting ErrFile to fd 2...
	I0108 21:15:40.084939  350745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:15:40.085148  350745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 21:15:40.085798  350745 out.go:303] Setting JSON to false
	I0108 21:15:40.086808  350745 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7066,"bootTime":1704741474,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:15:40.086888  350745 start.go:138] virtualization: kvm guest
	I0108 21:15:40.089217  350745 out.go:177] * [ingress-addon-legacy-798925] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:15:40.090721  350745 notify.go:220] Checking for updates...
	I0108 21:15:40.090727  350745 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:15:40.092411  350745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:15:40.093848  350745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:15:40.095113  350745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:15:40.096470  350745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:15:40.097813  350745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:15:40.099424  350745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:15:40.133843  350745 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:15:40.135335  350745 start.go:298] selected driver: kvm2
	I0108 21:15:40.135350  350745 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:15:40.135402  350745 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:15:40.136128  350745 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:15:40.136233  350745 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:15:40.150806  350745 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:15:40.150923  350745 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:15:40.151192  350745 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:15:40.151279  350745 cni.go:84] Creating CNI manager for ""
	I0108 21:15:40.151296  350745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:15:40.151310  350745 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:15:40.151323  350745 start_flags.go:321] config:
	{Name:ingress-addon-legacy-798925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-798925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:15:40.151556  350745 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:15:40.153849  350745 out.go:177] * Starting control plane node ingress-addon-legacy-798925 in cluster ingress-addon-legacy-798925
	I0108 21:15:40.155309  350745 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 21:15:40.188354  350745 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 21:15:40.188394  350745 cache.go:56] Caching tarball of preloaded images
	I0108 21:15:40.188580  350745 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 21:15:40.190695  350745 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 21:15:40.192087  350745 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:15:40.226023  350745 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 21:15:44.297938  350745 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:15:44.298064  350745 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:15:45.307753  350745 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
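	The preload tarball is downloaded with an md5 checksum appended to the URL query string and verified before use. To repeat that check by hand against the cached file, a minimal sketch (assuming md5sum is available on the Jenkins host) is:
	
	  echo "0d02e096853189c5b37812b400898e14  /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -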
	I0108 21:15:45.308180  350745 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/config.json ...
	I0108 21:15:45.308216  350745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/config.json: {Name:mk99ba04ecdc9137235f84f7183700336ad2f64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:15:45.308417  350745 start.go:365] acquiring machines lock for ingress-addon-legacy-798925: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:15:45.308451  350745 start.go:369] acquired machines lock for "ingress-addon-legacy-798925" in 18.713µs
	I0108 21:15:45.308470  350745 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-798925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-798925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:15:45.308539  350745 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 21:15:45.310842  350745 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0108 21:15:45.311032  350745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:15:45.311078  350745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:15:45.325632  350745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0108 21:15:45.326087  350745 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:15:45.326751  350745 main.go:141] libmachine: Using API Version  1
	I0108 21:15:45.326778  350745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:15:45.327113  350745 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:15:45.327311  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetMachineName
	I0108 21:15:45.327523  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:15:45.327673  350745 start.go:159] libmachine.API.Create for "ingress-addon-legacy-798925" (driver="kvm2")
	I0108 21:15:45.327701  350745 client.go:168] LocalClient.Create starting
	I0108 21:15:45.327739  350745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem
	I0108 21:15:45.327778  350745 main.go:141] libmachine: Decoding PEM data...
	I0108 21:15:45.327795  350745 main.go:141] libmachine: Parsing certificate...
	I0108 21:15:45.327850  350745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem
	I0108 21:15:45.327870  350745 main.go:141] libmachine: Decoding PEM data...
	I0108 21:15:45.327881  350745 main.go:141] libmachine: Parsing certificate...
	I0108 21:15:45.327899  350745 main.go:141] libmachine: Running pre-create checks...
	I0108 21:15:45.327908  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .PreCreateCheck
	I0108 21:15:45.328241  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetConfigRaw
	I0108 21:15:45.328671  350745 main.go:141] libmachine: Creating machine...
	I0108 21:15:45.328685  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .Create
	I0108 21:15:45.328846  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Creating KVM machine...
	I0108 21:15:45.330110  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found existing default KVM network
	I0108 21:15:45.330840  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:45.330707  350780 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a20}
	I0108 21:15:45.336358  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | trying to create private KVM network mk-ingress-addon-legacy-798925 192.168.39.0/24...
	I0108 21:15:45.406409  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | private KVM network mk-ingress-addon-legacy-798925 192.168.39.0/24 created
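	At this point the driver has created a dedicated libvirt network mk-ingress-addon-legacy-798925 on 192.168.39.0/24 in addition to the default network. A quick sanity check from the host, assuming virsh is usable by the jenkins user, looks like:
	
	  virsh net-list --all                               # 'default' and 'mk-ingress-addon-legacy-798925' should both be active
	  virsh net-dumpxml mk-ingress-addon-legacy-798925   # shows the 192.168.39.0/24 bridge and its DHCP range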
	I0108 21:15:45.406442  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:45.406387  350780 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:15:45.406456  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Setting up store path in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925 ...
	I0108 21:15:45.406475  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Building disk image from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 21:15:45.406575  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Downloading /home/jenkins/minikube-integration/17866-334768/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 21:15:45.648276  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:45.648134  350780 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/id_rsa...
	I0108 21:15:45.773781  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:45.773613  350780 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/ingress-addon-legacy-798925.rawdisk...
	I0108 21:15:45.773825  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Writing magic tar header
	I0108 21:15:45.773914  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Writing SSH key tar header
	I0108 21:15:45.773959  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925 (perms=drwx------)
	I0108 21:15:45.773977  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:45.773742  350780 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925 ...
	I0108 21:15:45.773995  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925
	I0108 21:15:45.774012  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines
	I0108 21:15:45.774030  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:15:45.774048  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768
	I0108 21:15:45.774060  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines (perms=drwxr-xr-x)
	I0108 21:15:45.774071  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube (perms=drwxr-xr-x)
	I0108 21:15:45.774081  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768 (perms=drwxrwxr-x)
	I0108 21:15:45.774091  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 21:15:45.774100  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 21:15:45.774113  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Creating domain...
	I0108 21:15:45.774122  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 21:15:45.774134  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Checking permissions on dir: /home/jenkins
	I0108 21:15:45.774146  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Checking permissions on dir: /home
	I0108 21:15:45.774154  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Skipping /home - not owner
	I0108 21:15:45.775270  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) define libvirt domain using xml: 
	I0108 21:15:45.775295  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) <domain type='kvm'>
	I0108 21:15:45.775310  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   <name>ingress-addon-legacy-798925</name>
	I0108 21:15:45.775321  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   <memory unit='MiB'>4096</memory>
	I0108 21:15:45.775335  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   <vcpu>2</vcpu>
	I0108 21:15:45.775345  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   <features>
	I0108 21:15:45.775369  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <acpi/>
	I0108 21:15:45.775387  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <apic/>
	I0108 21:15:45.775402  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <pae/>
	I0108 21:15:45.775412  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     
	I0108 21:15:45.775425  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   </features>
	I0108 21:15:45.775438  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   <cpu mode='host-passthrough'>
	I0108 21:15:45.775449  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   
	I0108 21:15:45.775464  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   </cpu>
	I0108 21:15:45.775478  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   <os>
	I0108 21:15:45.775494  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <type>hvm</type>
	I0108 21:15:45.775508  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <boot dev='cdrom'/>
	I0108 21:15:45.775521  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <boot dev='hd'/>
	I0108 21:15:45.775541  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <bootmenu enable='no'/>
	I0108 21:15:45.775562  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   </os>
	I0108 21:15:45.775569  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   <devices>
	I0108 21:15:45.775583  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <disk type='file' device='cdrom'>
	I0108 21:15:45.775600  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/boot2docker.iso'/>
	I0108 21:15:45.775623  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <target dev='hdc' bus='scsi'/>
	I0108 21:15:45.775634  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <readonly/>
	I0108 21:15:45.775643  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     </disk>
	I0108 21:15:45.775656  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <disk type='file' device='disk'>
	I0108 21:15:45.775671  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 21:15:45.775688  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/ingress-addon-legacy-798925.rawdisk'/>
	I0108 21:15:45.775701  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <target dev='hda' bus='virtio'/>
	I0108 21:15:45.775738  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     </disk>
	I0108 21:15:45.775765  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <interface type='network'>
	I0108 21:15:45.775782  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <source network='mk-ingress-addon-legacy-798925'/>
	I0108 21:15:45.775797  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <model type='virtio'/>
	I0108 21:15:45.775811  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     </interface>
	I0108 21:15:45.775822  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <interface type='network'>
	I0108 21:15:45.775837  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <source network='default'/>
	I0108 21:15:45.775855  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <model type='virtio'/>
	I0108 21:15:45.775871  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     </interface>
	I0108 21:15:45.775884  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <serial type='pty'>
	I0108 21:15:45.775900  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <target port='0'/>
	I0108 21:15:45.775913  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     </serial>
	I0108 21:15:45.775931  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <console type='pty'>
	I0108 21:15:45.775951  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <target type='serial' port='0'/>
	I0108 21:15:45.775965  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     </console>
	I0108 21:15:45.775975  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     <rng model='virtio'>
	I0108 21:15:45.775997  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)       <backend model='random'>/dev/random</backend>
	I0108 21:15:45.776012  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     </rng>
	I0108 21:15:45.776020  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     
	I0108 21:15:45.776029  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)     
	I0108 21:15:45.776036  350745 main.go:141] libmachine: (ingress-addon-legacy-798925)   </devices>
	I0108 21:15:45.776043  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) </domain>
	I0108 21:15:45.776052  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) 
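	The XML above defines the VM with 2 vCPUs, 4096 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on mk-ingress-addon-legacy-798925, one on the default network). Once the domain is defined, the stored definition can be inspected directly, for example:
	
	  virsh dumpxml ingress-addon-legacy-798925     # full domain definition as stored by libvirt
	  virsh domiflist ingress-addon-legacy-798925   # the two virtio interfaces and the networks they attach to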
	I0108 21:15:45.780542  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:c1:6d:49 in network default
	I0108 21:15:45.781151  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Ensuring networks are active...
	I0108 21:15:45.781174  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:45.781754  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Ensuring network default is active
	I0108 21:15:45.782070  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Ensuring network mk-ingress-addon-legacy-798925 is active
	I0108 21:15:45.782611  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Getting domain xml...
	I0108 21:15:45.783257  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Creating domain...
	I0108 21:15:47.008800  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Waiting to get IP...
	I0108 21:15:47.009650  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:47.010011  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:47.010043  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:47.009974  350780 retry.go:31] will retry after 293.076819ms: waiting for machine to come up
	I0108 21:15:47.304586  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:47.305079  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:47.305108  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:47.305042  350780 retry.go:31] will retry after 309.202705ms: waiting for machine to come up
	I0108 21:15:47.615818  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:47.616279  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:47.616313  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:47.616223  350780 retry.go:31] will retry after 424.093126ms: waiting for machine to come up
	I0108 21:15:48.041608  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:48.042101  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:48.042135  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:48.042045  350780 retry.go:31] will retry after 401.247381ms: waiting for machine to come up
	I0108 21:15:48.444602  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:48.444963  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:48.445005  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:48.444925  350780 retry.go:31] will retry after 690.021964ms: waiting for machine to come up
	I0108 21:15:49.136879  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:49.137258  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:49.137298  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:49.137199  350780 retry.go:31] will retry after 797.272144ms: waiting for machine to come up
	I0108 21:15:49.936190  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:49.936595  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:49.936630  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:49.936542  350780 retry.go:31] will retry after 1.127386312s: waiting for machine to come up
	I0108 21:15:51.065791  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:51.066103  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:51.066135  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:51.066049  350780 retry.go:31] will retry after 1.322103561s: waiting for machine to come up
	I0108 21:15:52.391858  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:52.392371  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:52.392403  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:52.392311  350780 retry.go:31] will retry after 1.334681841s: waiting for machine to come up
	I0108 21:15:53.728488  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:53.728884  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:53.728944  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:53.728848  350780 retry.go:31] will retry after 1.535016556s: waiting for machine to come up
	I0108 21:15:55.265624  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:55.266070  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:55.266104  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:55.266012  350780 retry.go:31] will retry after 2.10287531s: waiting for machine to come up
	I0108 21:15:57.371813  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:15:57.372325  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:15:57.372357  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:15:57.372281  350780 retry.go:31] will retry after 2.921674771s: waiting for machine to come up
	I0108 21:16:00.297270  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:00.297693  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:16:00.297720  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:16:00.297645  350780 retry.go:31] will retry after 4.247022629s: waiting for machine to come up
	I0108 21:16:04.547333  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:04.547785  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find current IP address of domain ingress-addon-legacy-798925 in network mk-ingress-addon-legacy-798925
	I0108 21:16:04.547811  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | I0108 21:16:04.547715  350780 retry.go:31] will retry after 4.962035271s: waiting for machine to come up
	I0108 21:16:09.514355  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.514778  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Found IP for machine: 192.168.39.193
	I0108 21:16:09.514803  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Reserving static IP address...
	I0108 21:16:09.514854  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has current primary IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.515136  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-798925", mac: "52:54:00:2b:46:6d", ip: "192.168.39.193"} in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.586785  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Getting to WaitForSSH function...
	I0108 21:16:09.586830  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Reserved static IP address: 192.168.39.193
	I0108 21:16:09.586844  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Waiting for SSH to be available...
	I0108 21:16:09.589466  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.589897  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:09.589942  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.590039  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Using SSH client type: external
	I0108 21:16:09.590061  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/id_rsa (-rw-------)
	I0108 21:16:09.590097  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:16:09.590113  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | About to run SSH command:
	I0108 21:16:09.590134  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | exit 0
	I0108 21:16:09.675532  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | SSH cmd err, output: <nil>: 
	I0108 21:16:09.675841  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) KVM machine creation complete!
	I0108 21:16:09.676198  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetConfigRaw
	I0108 21:16:09.676735  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:16:09.676943  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:16:09.677080  350745 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 21:16:09.677100  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetState
	I0108 21:16:09.678430  350745 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 21:16:09.678449  350745 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 21:16:09.678455  350745 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 21:16:09.678465  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:09.680685  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.681084  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:09.681116  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.681264  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:09.681464  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:09.681624  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:09.681749  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:09.681922  350745 main.go:141] libmachine: Using SSH client type: native
	I0108 21:16:09.682553  350745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0108 21:16:09.682579  350745 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 21:16:09.794634  350745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:16:09.794666  350745 main.go:141] libmachine: Detecting the provisioner...
	I0108 21:16:09.794675  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:09.797295  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.797670  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:09.797707  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.797840  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:09.798063  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:09.798222  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:09.798372  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:09.798514  350745 main.go:141] libmachine: Using SSH client type: native
	I0108 21:16:09.798840  350745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0108 21:16:09.798854  350745 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 21:16:09.912280  350745 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 21:16:09.912346  350745 main.go:141] libmachine: found compatible host: buildroot
	I0108 21:16:09.912354  350745 main.go:141] libmachine: Provisioning with buildroot...
	I0108 21:16:09.912365  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetMachineName
	I0108 21:16:09.912673  350745 buildroot.go:166] provisioning hostname "ingress-addon-legacy-798925"
	I0108 21:16:09.912708  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetMachineName
	I0108 21:16:09.912913  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:09.915469  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.915813  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:09.915848  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:09.916019  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:09.916196  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:09.916350  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:09.916508  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:09.916677  350745 main.go:141] libmachine: Using SSH client type: native
	I0108 21:16:09.917019  350745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0108 21:16:09.917034  350745 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-798925 && echo "ingress-addon-legacy-798925" | sudo tee /etc/hostname
	I0108 21:16:10.039657  350745 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-798925
	
	I0108 21:16:10.039693  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:10.042328  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.042684  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:10.042717  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.042905  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:10.043086  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:10.043242  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:10.043408  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:10.043565  350745 main.go:141] libmachine: Using SSH client type: native
	I0108 21:16:10.043887  350745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0108 21:16:10.043908  350745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-798925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-798925/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-798925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:16:10.159819  350745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:16:10.159865  350745 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 21:16:10.159915  350745 buildroot.go:174] setting up certificates
	I0108 21:16:10.159925  350745 provision.go:83] configureAuth start
	I0108 21:16:10.159937  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetMachineName
	I0108 21:16:10.160239  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetIP
	I0108 21:16:10.162701  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.163210  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:10.163240  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.163402  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:10.165674  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.166002  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:10.166034  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.166144  350745 provision.go:138] copyHostCerts
	I0108 21:16:10.166180  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:16:10.166227  350745 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 21:16:10.166242  350745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:16:10.166313  350745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 21:16:10.166393  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:16:10.166411  350745 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 21:16:10.166416  350745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:16:10.166439  350745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 21:16:10.166483  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:16:10.166504  350745 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 21:16:10.166510  350745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:16:10.166530  350745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 21:16:10.166572  350745 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-798925 san=[192.168.39.193 192.168.39.193 localhost 127.0.0.1 minikube ingress-addon-legacy-798925]
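	The server certificate generated here carries SANs for the VM IP, localhost, 127.0.0.1, minikube and the machine name. If those SANs ever need checking, openssl can print them from the generated file (assuming openssl is installed on the host):
	
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	  # expected: 192.168.39.193, localhost, 127.0.0.1, minikube, ingress-addon-legacy-798925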
	I0108 21:16:10.373930  350745 provision.go:172] copyRemoteCerts
	I0108 21:16:10.373995  350745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:16:10.374022  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:10.376864  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.377177  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:10.377199  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.377406  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:10.377631  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:10.377799  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:10.377899  350745 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/id_rsa Username:docker}
	I0108 21:16:10.460267  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:16:10.460341  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:16:10.482715  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:16:10.482775  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:16:10.504899  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:16:10.504984  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:16:10.526919  350745 provision.go:86] duration metric: configureAuth took 366.97636ms
	I0108 21:16:10.526958  350745 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:16:10.527167  350745 config.go:182] Loaded profile config "ingress-addon-legacy-798925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 21:16:10.527301  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:10.529657  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.530051  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:10.530085  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.530263  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:10.530471  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:10.530656  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:10.530821  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:10.531007  350745 main.go:141] libmachine: Using SSH client type: native
	I0108 21:16:10.531515  350745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0108 21:16:10.531542  350745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:16:10.847087  350745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:16:10.847127  350745 main.go:141] libmachine: Checking connection to Docker...
	I0108 21:16:10.847142  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetURL
	I0108 21:16:10.848436  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Using libvirt version 6000000
	I0108 21:16:10.850714  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.851040  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:10.851072  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.851222  350745 main.go:141] libmachine: Docker is up and running!
	I0108 21:16:10.851240  350745 main.go:141] libmachine: Reticulating splines...
	I0108 21:16:10.851248  350745 client.go:171] LocalClient.Create took 25.523539013s
	I0108 21:16:10.851271  350745 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-798925" took 25.523598842s
	I0108 21:16:10.851282  350745 start.go:300] post-start starting for "ingress-addon-legacy-798925" (driver="kvm2")
	I0108 21:16:10.851291  350745 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:16:10.851309  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:16:10.851593  350745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:16:10.851634  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:10.854258  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.854565  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:10.854596  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.854753  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:10.854961  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:10.855128  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:10.855250  350745 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/id_rsa Username:docker}
	I0108 21:16:10.941509  350745 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:16:10.945812  350745 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:16:10.945837  350745 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 21:16:10.945917  350745 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 21:16:10.946096  350745 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 21:16:10.946116  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /etc/ssl/certs/3419822.pem
	I0108 21:16:10.946252  350745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:16:10.955216  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:16:10.976169  350745 start.go:303] post-start completed in 124.875172ms
	I0108 21:16:10.976239  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetConfigRaw
	I0108 21:16:10.976819  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetIP
	I0108 21:16:10.979694  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.980135  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:10.980168  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.980382  350745 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/config.json ...
	I0108 21:16:10.980553  350745 start.go:128] duration metric: createHost completed in 25.672004738s
	I0108 21:16:10.980577  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:10.982805  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.983119  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:10.983140  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:10.983291  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:10.983487  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:10.983673  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:10.983815  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:10.983957  350745 main.go:141] libmachine: Using SSH client type: native
	I0108 21:16:10.984416  350745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0108 21:16:10.984436  350745 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:16:11.096045  350745 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704748571.079747180
	
	I0108 21:16:11.096071  350745 fix.go:206] guest clock: 1704748571.079747180
	I0108 21:16:11.096078  350745 fix.go:219] Guest: 2024-01-08 21:16:11.07974718 +0000 UTC Remote: 2024-01-08 21:16:10.980564279 +0000 UTC m=+30.953124768 (delta=99.182901ms)
	I0108 21:16:11.096113  350745 fix.go:190] guest clock delta is within tolerance: 99.182901ms
	I0108 21:16:11.096118  350745 start.go:83] releasing machines lock for "ingress-addon-legacy-798925", held for 25.787657029s
	I0108 21:16:11.096139  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:16:11.096420  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetIP
	I0108 21:16:11.099204  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:11.099610  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:11.099655  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:11.099815  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:16:11.100332  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:16:11.100556  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:16:11.100647  350745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:16:11.100697  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:11.100820  350745 ssh_runner.go:195] Run: cat /version.json
	I0108 21:16:11.100856  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:11.103413  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:11.103660  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:11.103790  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:11.103821  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:11.103952  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:11.104150  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:11.104136  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:11.104217  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:11.104396  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:11.104489  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:11.104595  350745 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/id_rsa Username:docker}
	I0108 21:16:11.104663  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:11.104787  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:11.104923  350745 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/id_rsa Username:docker}
	I0108 21:16:11.208785  350745 ssh_runner.go:195] Run: systemctl --version
	I0108 21:16:11.214538  350745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:16:11.371824  350745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:16:11.377917  350745 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:16:11.377994  350745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:16:11.391585  350745 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:16:11.391611  350745 start.go:475] detecting cgroup driver to use...
	I0108 21:16:11.391675  350745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:16:11.407616  350745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:16:11.420277  350745 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:16:11.420342  350745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:16:11.433605  350745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:16:11.447338  350745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:16:11.556798  350745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:16:11.675918  350745 docker.go:219] disabling docker service ...
	I0108 21:16:11.676016  350745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:16:11.689653  350745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:16:11.701337  350745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:16:11.812305  350745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:16:11.934586  350745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:16:11.947612  350745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:16:11.964962  350745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 21:16:11.965058  350745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:16:11.975078  350745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:16:11.975157  350745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:16:11.985155  350745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:16:11.996584  350745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
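The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin CRI-O's pause image and switch it to the cgroupfs cgroup manager. After they run, the relevant keys in that drop-in should read roughly as follows (a minimal sketch; the section headers are assumed, since the sed expressions only match the key lines):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"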
	I0108 21:16:12.008368  350745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:16:12.019140  350745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:16:12.028266  350745 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:16:12.028346  350745 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 21:16:12.041052  350745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
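The sysctl probe a few lines above fails with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded; minikube therefore loads the module and enables IPv4 forwarding before reloading systemd and restarting CRI-O. Illustrative follow-up commands (not taken from this log) that would confirm the guest's state:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above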
	I0108 21:16:12.050043  350745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:16:12.171306  350745 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:16:12.447327  350745 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:16:12.447445  350745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:16:12.452485  350745 start.go:543] Will wait 60s for crictl version
	I0108 21:16:12.452548  350745 ssh_runner.go:195] Run: which crictl
	I0108 21:16:12.456923  350745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:16:12.497684  350745 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:16:12.497810  350745 ssh_runner.go:195] Run: crio --version
	I0108 21:16:12.550030  350745 ssh_runner.go:195] Run: crio --version
	I0108 21:16:12.678030  350745 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0108 21:16:12.741510  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetIP
	I0108 21:16:12.744484  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:12.744835  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:12.744868  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:12.745106  350745 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:16:12.749697  350745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
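The bash one-liner above is how minikube keeps /etc/hosts idempotent: it filters out any existing host.minikube.internal entry, appends the gateway mapping, and sudo-copies the temp file back into place. Afterwards the guest's /etc/hosts contains a line like:

    192.168.39.1	host.minikube.internal

The same pattern is repeated later for control-plane.minikube.internal.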
	I0108 21:16:12.762312  350745 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 21:16:12.762365  350745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:16:12.797433  350745 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 21:16:12.797515  350745 ssh_runner.go:195] Run: which lz4
	I0108 21:16:12.801520  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 21:16:12.801628  350745 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 21:16:12.805865  350745 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:16:12.805896  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0108 21:16:14.729706  350745 crio.go:444] Took 1.928107 seconds to copy over tarball
	I0108 21:16:14.729786  350745 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:16:17.641905  350745 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.912073147s)
	I0108 21:16:17.641940  350745 crio.go:451] Took 2.912204 seconds to extract the tarball
	I0108 21:16:17.641952  350745 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 21:16:17.684413  350745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:16:17.741512  350745 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 21:16:17.741551  350745 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 21:16:17.741664  350745 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:16:17.741635  350745 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:16:17.741738  350745 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:16:17.741765  350745 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:16:17.741761  350745 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:16:17.741746  350745 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 21:16:17.741633  350745 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:16:17.742218  350745 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 21:16:17.743158  350745 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 21:16:17.743192  350745 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:16:17.743172  350745 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:16:17.743164  350745 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:16:17.743163  350745 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:16:17.743153  350745 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 21:16:17.743161  350745 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:16:17.743474  350745 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:16:17.960797  350745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0108 21:16:17.962825  350745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0108 21:16:17.965978  350745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:16:17.971002  350745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0108 21:16:17.974319  350745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:16:17.977211  350745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:16:18.006330  350745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:16:18.069983  350745 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0108 21:16:18.070037  350745 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 21:16:18.070089  350745 ssh_runner.go:195] Run: which crictl
	I0108 21:16:18.110081  350745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:16:18.122244  350745 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0108 21:16:18.122302  350745 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:16:18.122357  350745 ssh_runner.go:195] Run: which crictl
	I0108 21:16:18.252294  350745 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0108 21:16:18.252329  350745 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0108 21:16:18.252342  350745 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0108 21:16:18.252362  350745 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0108 21:16:18.252366  350745 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:16:18.252385  350745 ssh_runner.go:195] Run: which crictl
	I0108 21:16:18.252389  350745 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:16:18.252435  350745 ssh_runner.go:195] Run: which crictl
	I0108 21:16:18.252440  350745 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0108 21:16:18.252472  350745 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:16:18.252438  350745 ssh_runner.go:195] Run: which crictl
	I0108 21:16:18.252507  350745 ssh_runner.go:195] Run: which crictl
	I0108 21:16:18.252529  350745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0108 21:16:18.252568  350745 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0108 21:16:18.252595  350745 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:16:18.252619  350745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0108 21:16:18.252622  350745 ssh_runner.go:195] Run: which crictl
	I0108 21:16:18.274256  350745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 21:16:18.274320  350745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 21:16:18.274367  350745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 21:16:18.317507  350745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0108 21:16:18.317520  350745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 21:16:18.317742  350745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0108 21:16:18.365673  350745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 21:16:18.397112  350745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 21:16:18.407675  350745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 21:16:18.407675  350745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 21:16:18.429076  350745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0108 21:16:18.432454  350745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 21:16:18.432514  350745 cache_images.go:92] LoadImages completed in 690.941071ms
	W0108 21:16:18.432600  350745 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I0108 21:16:18.432675  350745 ssh_runner.go:195] Run: crio config
	I0108 21:16:18.492740  350745 cni.go:84] Creating CNI manager for ""
	I0108 21:16:18.492770  350745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:16:18.492791  350745 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:16:18.492826  350745 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-798925 NodeName:ingress-addon-legacy-798925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 21:16:18.492993  350745 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-798925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:16:18.493105  350745 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-798925 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-798925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:16:18.493187  350745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 21:16:18.502804  350745 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:16:18.502873  350745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:16:18.511823  350745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0108 21:16:18.528168  350745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 21:16:18.543207  350745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0108 21:16:18.558350  350745 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I0108 21:16:18.561908  350745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:16:18.573678  350745 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925 for IP: 192.168.39.193
	I0108 21:16:18.573737  350745 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:16:18.573926  350745 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 21:16:18.573985  350745 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 21:16:18.574068  350745 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.key
	I0108 21:16:18.574088  350745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt with IP's: []
	I0108 21:16:18.785224  350745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt ...
	I0108 21:16:18.785263  350745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: {Name:mk3cb8425d349f61d392a5a9bcaa84368a634c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:16:18.785473  350745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.key ...
	I0108 21:16:18.785497  350745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.key: {Name:mk6f557e77b95fbdb7e864598c7846f8d0cb9f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:16:18.785603  350745 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.key.20e25ccb
	I0108 21:16:18.785627  350745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.crt.20e25ccb with IP's: [192.168.39.193 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:16:18.845083  350745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.crt.20e25ccb ...
	I0108 21:16:18.845121  350745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.crt.20e25ccb: {Name:mk8cd974f71468dc7afc20b3b74d7cd85e304f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:16:18.845301  350745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.key.20e25ccb ...
	I0108 21:16:18.845324  350745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.key.20e25ccb: {Name:mkf41a98576310b54d365f62e9da80926edce22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:16:18.845424  350745 certs.go:337] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.crt.20e25ccb -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.crt
	I0108 21:16:18.845532  350745 certs.go:341] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.key.20e25ccb -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.key
	I0108 21:16:18.845619  350745 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.key
	I0108 21:16:18.845639  350745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.crt with IP's: []
	I0108 21:16:19.056082  350745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.crt ...
	I0108 21:16:19.056121  350745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.crt: {Name:mk2b18b7a9266c21225a08b4b686f336400944c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:16:19.056290  350745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.key ...
	I0108 21:16:19.056304  350745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.key: {Name:mk4dcee0b3b83bd5ac50ba06ed715b281ce3f0bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:16:19.056369  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 21:16:19.056390  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 21:16:19.056400  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 21:16:19.056410  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 21:16:19.056421  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:16:19.056439  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:16:19.056450  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:16:19.056462  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:16:19.056520  350745 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 21:16:19.056558  350745 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 21:16:19.056568  350745 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:16:19.056590  350745 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:16:19.056614  350745 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:16:19.056639  350745 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 21:16:19.056682  350745 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:16:19.056725  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /usr/share/ca-certificates/3419822.pem
	I0108 21:16:19.056758  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:16:19.056774  350745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem -> /usr/share/ca-certificates/341982.pem
	I0108 21:16:19.057411  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:16:19.082949  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:16:19.104783  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:16:19.126218  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:16:19.149505  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:16:19.173571  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:16:19.197314  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:16:19.219852  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:16:19.241473  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 21:16:19.263353  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:16:19.284794  350745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 21:16:19.306979  350745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:16:19.322659  350745 ssh_runner.go:195] Run: openssl version
	I0108 21:16:19.328181  350745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 21:16:19.338564  350745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 21:16:19.343310  350745 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:16:19.343392  350745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 21:16:19.349504  350745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:16:19.359944  350745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:16:19.370446  350745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:16:19.375217  350745 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:16:19.375278  350745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:16:19.380602  350745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:16:19.391050  350745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 21:16:19.401389  350745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 21:16:19.405976  350745 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:16:19.406094  350745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 21:16:19.412141  350745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
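Each CA pushed to /usr/share/ca-certificates also gets a hash-named symlink in /etc/ssl/certs, because OpenSSL looks certificates up by subject-name hash. The openssl x509 -hash -noout calls above compute that hash, and the ln -fs one-liners create the matching <hash>.0 link; for example, using the names from this run:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink created above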
	I0108 21:16:19.422444  350745 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:16:19.426669  350745 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:16:19.426719  350745 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-798925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-798925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:16:19.426815  350745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:16:19.426869  350745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:16:19.465082  350745 cri.go:89] found id: ""
	I0108 21:16:19.465159  350745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:16:19.474634  350745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:16:19.483803  350745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:16:19.492974  350745 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:16:19.493019  350745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0108 21:16:19.551448  350745 kubeadm.go:322] W0108 21:16:19.544502     968 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 21:16:19.688300  350745 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:16:21.762085  350745 kubeadm.go:322] W0108 21:16:21.756445     968 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 21:16:21.763333  350745 kubeadm.go:322] W0108 21:16:21.757607     968 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 21:16:32.265668  350745 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 21:16:32.265727  350745 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:16:32.265857  350745 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:16:32.265990  350745 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:16:32.266086  350745 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:16:32.266174  350745 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:16:32.266332  350745 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:16:32.266394  350745 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:16:32.266507  350745 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:16:32.268138  350745 out.go:204]   - Generating certificates and keys ...
	I0108 21:16:32.268242  350745 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:16:32.268328  350745 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:16:32.268436  350745 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:16:32.268529  350745 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:16:32.268605  350745 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:16:32.268674  350745 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 21:16:32.268748  350745 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 21:16:32.268873  350745 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-798925 localhost] and IPs [192.168.39.193 127.0.0.1 ::1]
	I0108 21:16:32.268926  350745 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 21:16:32.269057  350745 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-798925 localhost] and IPs [192.168.39.193 127.0.0.1 ::1]
	I0108 21:16:32.269146  350745 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:16:32.269242  350745 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:16:32.269308  350745 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 21:16:32.269391  350745 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:16:32.269473  350745 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:16:32.269556  350745 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:16:32.269626  350745 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:16:32.269672  350745 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:16:32.269729  350745 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:16:32.271196  350745 out.go:204]   - Booting up control plane ...
	I0108 21:16:32.271281  350745 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:16:32.271344  350745 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:16:32.271479  350745 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:16:32.271556  350745 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:16:32.271678  350745 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:16:32.271745  350745 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003541 seconds
	I0108 21:16:32.271841  350745 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:16:32.271952  350745 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:16:32.272004  350745 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:16:32.272167  350745 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-798925 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:16:32.272241  350745 kubeadm.go:322] [bootstrap-token] Using token: k30ksb.vkicvzir6nw6hxyy
	I0108 21:16:32.273552  350745 out.go:204]   - Configuring RBAC rules ...
	I0108 21:16:32.273659  350745 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:16:32.273735  350745 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:16:32.273861  350745 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:16:32.273976  350745 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:16:32.274140  350745 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:16:32.274252  350745 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:16:32.274395  350745 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:16:32.274434  350745 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:16:32.274484  350745 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:16:32.274490  350745 kubeadm.go:322] 
	I0108 21:16:32.274546  350745 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:16:32.274552  350745 kubeadm.go:322] 
	I0108 21:16:32.274612  350745 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:16:32.274617  350745 kubeadm.go:322] 
	I0108 21:16:32.274638  350745 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:16:32.274700  350745 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:16:32.274750  350745 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:16:32.274757  350745 kubeadm.go:322] 
	I0108 21:16:32.274804  350745 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:16:32.274891  350745 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:16:32.274952  350745 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:16:32.274957  350745 kubeadm.go:322] 
	I0108 21:16:32.275028  350745 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:16:32.275094  350745 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:16:32.275100  350745 kubeadm.go:322] 
	I0108 21:16:32.275166  350745 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k30ksb.vkicvzir6nw6hxyy \
	I0108 21:16:32.275250  350745 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 21:16:32.275279  350745 kubeadm.go:322]     --control-plane 
	I0108 21:16:32.275287  350745 kubeadm.go:322] 
	I0108 21:16:32.275365  350745 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:16:32.275375  350745 kubeadm.go:322] 
	I0108 21:16:32.275439  350745 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k30ksb.vkicvzir6nw6hxyy \
	I0108 21:16:32.275577  350745 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 21:16:32.275589  350745 cni.go:84] Creating CNI manager for ""
	I0108 21:16:32.275599  350745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:16:32.277156  350745 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:16:32.278363  350745 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:16:32.288599  350745 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 21:16:32.305554  350745 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:16:32.305641  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:32.305644  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=ingress-addon-legacy-798925 minikube.k8s.io/updated_at=2024_01_08T21_16_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:32.344571  350745 ops.go:34] apiserver oom_adj: -16
	I0108 21:16:32.517165  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:33.017318  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:33.517412  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:34.018137  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:34.517191  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:35.018094  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:35.517999  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:36.017359  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:36.517592  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:37.017391  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:37.517870  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:38.017664  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:38.518041  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:39.018184  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:39.518161  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:40.018129  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:40.518101  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:41.017481  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:41.518035  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:42.017639  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:42.518043  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:43.018044  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:43.518258  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:44.018234  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:44.518081  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:45.017391  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:45.517929  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:46.017620  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:46.517584  350745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:16:46.712739  350745 kubeadm.go:1088] duration metric: took 14.407149623s to wait for elevateKubeSystemPrivileges.
	I0108 21:16:46.712788  350745 kubeadm.go:406] StartCluster complete in 27.286071627s
	I0108 21:16:46.712827  350745 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:16:46.712915  350745 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:16:46.713784  350745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:16:46.714132  350745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:16:46.714210  350745 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:16:46.714303  350745 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-798925"
	I0108 21:16:46.714331  350745 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-798925"
	I0108 21:16:46.714403  350745 host.go:66] Checking if "ingress-addon-legacy-798925" exists ...
	I0108 21:16:46.714409  350745 config.go:182] Loaded profile config "ingress-addon-legacy-798925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 21:16:46.714330  350745 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-798925"
	I0108 21:16:46.714494  350745 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-798925"
	I0108 21:16:46.714904  350745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:16:46.714917  350745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:16:46.714932  350745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:16:46.714942  350745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:16:46.714869  350745 kapi.go:59] client config for ingress-addon-legacy-798925: &rest.Config{Host:"https://192.168.39.193:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:16:46.715848  350745 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 21:16:46.729695  350745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
	I0108 21:16:46.729870  350745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0108 21:16:46.730257  350745 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:16:46.730266  350745 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:16:46.730802  350745 main.go:141] libmachine: Using API Version  1
	I0108 21:16:46.730820  350745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:16:46.730953  350745 main.go:141] libmachine: Using API Version  1
	I0108 21:16:46.730986  350745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:16:46.731203  350745 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:16:46.731416  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetState
	I0108 21:16:46.731467  350745 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:16:46.732105  350745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:16:46.732144  350745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:16:46.733891  350745 kapi.go:59] client config for ingress-addon-legacy-798925: &rest.Config{Host:"https://192.168.39.193:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:16:46.734185  350745 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-798925"
	I0108 21:16:46.734234  350745 host.go:66] Checking if "ingress-addon-legacy-798925" exists ...
	I0108 21:16:46.734545  350745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:16:46.734577  350745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:16:46.747536  350745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33345
	I0108 21:16:46.748023  350745 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:16:46.748489  350745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37977
	I0108 21:16:46.748567  350745 main.go:141] libmachine: Using API Version  1
	I0108 21:16:46.748592  350745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:16:46.748873  350745 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:16:46.748926  350745 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:16:46.749116  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetState
	I0108 21:16:46.749385  350745 main.go:141] libmachine: Using API Version  1
	I0108 21:16:46.749407  350745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:16:46.749811  350745 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:16:46.750390  350745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:16:46.750430  350745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:16:46.751034  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:16:46.753585  350745 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:16:46.755272  350745 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:16:46.755292  350745 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:16:46.755309  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:46.759114  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:46.759612  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:46.759648  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:46.759952  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:46.760212  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:46.760434  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:46.760608  350745 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/id_rsa Username:docker}
	I0108 21:16:46.766947  350745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I0108 21:16:46.767417  350745 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:16:46.767875  350745 main.go:141] libmachine: Using API Version  1
	I0108 21:16:46.767902  350745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:16:46.768333  350745 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:16:46.768502  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetState
	I0108 21:16:46.770201  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .DriverName
	I0108 21:16:46.770489  350745 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:16:46.770508  350745 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:16:46.770528  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHHostname
	I0108 21:16:46.773195  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:46.773785  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:6d", ip: ""} in network mk-ingress-addon-legacy-798925: {Iface:virbr1 ExpiryTime:2024-01-08 22:16:01 +0000 UTC Type:0 Mac:52:54:00:2b:46:6d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ingress-addon-legacy-798925 Clientid:01:52:54:00:2b:46:6d}
	I0108 21:16:46.773844  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | domain ingress-addon-legacy-798925 has defined IP address 192.168.39.193 and MAC address 52:54:00:2b:46:6d in network mk-ingress-addon-legacy-798925
	I0108 21:16:46.774079  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHPort
	I0108 21:16:46.774256  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHKeyPath
	I0108 21:16:46.774408  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .GetSSHUsername
	I0108 21:16:46.774566  350745 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/ingress-addon-legacy-798925/id_rsa Username:docker}
	I0108 21:16:46.907439  350745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:16:46.921264  350745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:16:46.949447  350745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:16:47.293137  350745 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-798925" context rescaled to 1 replicas
	I0108 21:16:47.293193  350745 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:16:47.295379  350745 out.go:177] * Verifying Kubernetes components...
	I0108 21:16:47.296953  350745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:16:47.584463  350745 main.go:141] libmachine: Making call to close driver server
	I0108 21:16:47.584497  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .Close
	I0108 21:16:47.584508  350745 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 21:16:47.584593  350745 main.go:141] libmachine: Making call to close driver server
	I0108 21:16:47.584619  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .Close
	I0108 21:16:47.584836  350745 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:16:47.584859  350745 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:16:47.584870  350745 main.go:141] libmachine: Making call to close driver server
	I0108 21:16:47.584879  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .Close
	I0108 21:16:47.584927  350745 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:16:47.584945  350745 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:16:47.584969  350745 main.go:141] libmachine: Making call to close driver server
	I0108 21:16:47.584980  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .Close
	I0108 21:16:47.585067  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Closing plugin on server side
	I0108 21:16:47.585115  350745 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:16:47.585136  350745 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:16:47.585149  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Closing plugin on server side
	I0108 21:16:47.585439  350745 kapi.go:59] client config for ingress-addon-legacy-798925: &rest.Config{Host:"https://192.168.39.193:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:16:47.585772  350745 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-798925" to be "Ready" ...
	I0108 21:16:47.586192  350745 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:16:47.586222  350745 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:16:47.595994  350745 node_ready.go:49] node "ingress-addon-legacy-798925" has status "Ready":"True"
	I0108 21:16:47.596024  350745 node_ready.go:38] duration metric: took 10.21044ms waiting for node "ingress-addon-legacy-798925" to be "Ready" ...
	I0108 21:16:47.596037  350745 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:16:47.619080  350745 main.go:141] libmachine: Making call to close driver server
	I0108 21:16:47.619115  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) Calling .Close
	I0108 21:16:47.619453  350745 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:16:47.619480  350745 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:16:47.619510  350745 main.go:141] libmachine: (ingress-addon-legacy-798925) DBG | Closing plugin on server side
	I0108 21:16:47.621528  350745 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:16:47.619814  350745 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace to be "Ready" ...
	I0108 21:16:47.623657  350745 addons.go:508] enable addons completed in 909.449531ms: enabled=[storage-provisioner default-storageclass]
	I0108 21:16:49.630920  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:51.631723  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:54.131527  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:56.131608  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:16:58.631770  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:00.632225  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:03.131661  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:05.631027  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:08.130940  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:10.131799  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:12.630774  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:14.631182  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:17.132132  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:19.630792  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:21.632307  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:23.633310  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:26.131111  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:28.631881  350745 pod_ready.go:102] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"False"
	I0108 21:17:29.130508  350745 pod_ready.go:92] pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace has status "Ready":"True"
	I0108 21:17:29.130537  350745 pod_ready.go:81] duration metric: took 41.506907349s waiting for pod "coredns-66bff467f8-pfmkn" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.130550  350745 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-798925" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.135292  350745 pod_ready.go:92] pod "etcd-ingress-addon-legacy-798925" in "kube-system" namespace has status "Ready":"True"
	I0108 21:17:29.135316  350745 pod_ready.go:81] duration metric: took 4.75657ms waiting for pod "etcd-ingress-addon-legacy-798925" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.135327  350745 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-798925" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.140323  350745 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-798925" in "kube-system" namespace has status "Ready":"True"
	I0108 21:17:29.140349  350745 pod_ready.go:81] duration metric: took 5.010982ms waiting for pod "kube-apiserver-ingress-addon-legacy-798925" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.140362  350745 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-798925" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.145360  350745 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-798925" in "kube-system" namespace has status "Ready":"True"
	I0108 21:17:29.145401  350745 pod_ready.go:81] duration metric: took 5.030723ms waiting for pod "kube-controller-manager-ingress-addon-legacy-798925" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.145415  350745 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-89s94" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.150865  350745 pod_ready.go:92] pod "kube-proxy-89s94" in "kube-system" namespace has status "Ready":"True"
	I0108 21:17:29.150885  350745 pod_ready.go:81] duration metric: took 5.462231ms waiting for pod "kube-proxy-89s94" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.150896  350745 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-798925" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.325327  350745 request.go:629] Waited for 174.34329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.193:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-798925
	I0108 21:17:29.524715  350745 request.go:629] Waited for 195.409521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.193:8443/api/v1/nodes/ingress-addon-legacy-798925
	I0108 21:17:29.528233  350745 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-798925" in "kube-system" namespace has status "Ready":"True"
	I0108 21:17:29.528304  350745 pod_ready.go:81] duration metric: took 377.388412ms waiting for pod "kube-scheduler-ingress-addon-legacy-798925" in "kube-system" namespace to be "Ready" ...
	I0108 21:17:29.528357  350745 pod_ready.go:38] duration metric: took 41.932285792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:17:29.528411  350745 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:17:29.528481  350745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:17:29.544120  350745 api_server.go:72] duration metric: took 42.25088442s to wait for apiserver process to appear ...
	I0108 21:17:29.544147  350745 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:17:29.544172  350745 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8443/healthz ...
	I0108 21:17:29.549633  350745 api_server.go:279] https://192.168.39.193:8443/healthz returned 200:
	ok
	I0108 21:17:29.550723  350745 api_server.go:141] control plane version: v1.18.20
	I0108 21:17:29.550746  350745 api_server.go:131] duration metric: took 6.592666ms to wait for apiserver health ...
	I0108 21:17:29.550756  350745 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:17:29.725255  350745 request.go:629] Waited for 174.394777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.193:8443/api/v1/namespaces/kube-system/pods
	I0108 21:17:29.731006  350745 system_pods.go:59] 7 kube-system pods found
	I0108 21:17:29.731035  350745 system_pods.go:61] "coredns-66bff467f8-pfmkn" [41cb231c-130d-4368-8185-45c5b9e90639] Running
	I0108 21:17:29.731042  350745 system_pods.go:61] "etcd-ingress-addon-legacy-798925" [deb67204-8745-42b7-9d2c-7ca9768a4768] Running
	I0108 21:17:29.731047  350745 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-798925" [148f6473-b9b4-4f98-8597-fe3a1c86633f] Running
	I0108 21:17:29.731061  350745 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-798925" [aaa9e90c-0d8c-4719-b3ba-735af603008f] Running
	I0108 21:17:29.731067  350745 system_pods.go:61] "kube-proxy-89s94" [dec80ad4-9822-42b4-9938-33218308b42c] Running
	I0108 21:17:29.731073  350745 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-798925" [068bb4ab-f19a-4f28-ad83-328a7476fd37] Running
	I0108 21:17:29.731087  350745 system_pods.go:61] "storage-provisioner" [946cb165-55dd-4595-9df8-ece71694f065] Running
	I0108 21:17:29.731096  350745 system_pods.go:74] duration metric: took 180.33256ms to wait for pod list to return data ...
	I0108 21:17:29.731110  350745 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:17:29.924591  350745 request.go:629] Waited for 193.399922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.193:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:17:29.929004  350745 default_sa.go:45] found service account: "default"
	I0108 21:17:29.929035  350745 default_sa.go:55] duration metric: took 197.914059ms for default service account to be created ...
	I0108 21:17:29.929044  350745 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:17:30.124679  350745 request.go:629] Waited for 195.559989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.193:8443/api/v1/namespaces/kube-system/pods
	I0108 21:17:30.130719  350745 system_pods.go:86] 7 kube-system pods found
	I0108 21:17:30.130751  350745 system_pods.go:89] "coredns-66bff467f8-pfmkn" [41cb231c-130d-4368-8185-45c5b9e90639] Running
	I0108 21:17:30.130757  350745 system_pods.go:89] "etcd-ingress-addon-legacy-798925" [deb67204-8745-42b7-9d2c-7ca9768a4768] Running
	I0108 21:17:30.130765  350745 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-798925" [148f6473-b9b4-4f98-8597-fe3a1c86633f] Running
	I0108 21:17:30.130772  350745 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-798925" [aaa9e90c-0d8c-4719-b3ba-735af603008f] Running
	I0108 21:17:30.130776  350745 system_pods.go:89] "kube-proxy-89s94" [dec80ad4-9822-42b4-9938-33218308b42c] Running
	I0108 21:17:30.130780  350745 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-798925" [068bb4ab-f19a-4f28-ad83-328a7476fd37] Running
	I0108 21:17:30.130784  350745 system_pods.go:89] "storage-provisioner" [946cb165-55dd-4595-9df8-ece71694f065] Running
	I0108 21:17:30.130792  350745 system_pods.go:126] duration metric: took 201.741896ms to wait for k8s-apps to be running ...
	I0108 21:17:30.130807  350745 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:17:30.130870  350745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:17:30.147259  350745 system_svc.go:56] duration metric: took 16.436796ms WaitForService to wait for kubelet.
	I0108 21:17:30.147288  350745 kubeadm.go:581] duration metric: took 42.854062438s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:17:30.147308  350745 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:17:30.324825  350745 request.go:629] Waited for 177.421039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.193:8443/api/v1/nodes
	I0108 21:17:30.328535  350745 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:17:30.328579  350745 node_conditions.go:123] node cpu capacity is 2
	I0108 21:17:30.328592  350745 node_conditions.go:105] duration metric: took 181.279523ms to run NodePressure ...
	I0108 21:17:30.328604  350745 start.go:228] waiting for startup goroutines ...
	I0108 21:17:30.328610  350745 start.go:233] waiting for cluster config update ...
	I0108 21:17:30.328636  350745 start.go:242] writing updated cluster config ...
	I0108 21:17:30.328932  350745 ssh_runner.go:195] Run: rm -f paused
	I0108 21:17:30.379640  350745 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0108 21:17:30.381854  350745 out.go:177] 
	W0108 21:17:30.383535  350745 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0108 21:17:30.385250  350745 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0108 21:17:30.386773  350745 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-798925" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:15:57 UTC, ends at Mon 2024-01-08 21:20:41 UTC. --
	Jan 08 21:20:40 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:40.992144245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748840992131338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=9a83887c-12a6-4f18-8af4-751f050f430b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:20:40 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:40.993004657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5aaff90b-0cda-4f10-bfe0-42f5f9a23feb name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:20:40 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:40.993047809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5aaff90b-0cda-4f10-bfe0-42f5f9a23feb name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:20:40 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:40.993414987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19ac994bdda9d1795022b00a91e494a2c39cec8a448a61cc79ec891b49a97aa6,PodSandboxId:932a3dbb25d16f0e8c19db878e9f0370246dcf327246f28ef29e7136f662de1b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704748823935909235,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-jp46d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8142407b-a384-4200-a2d2-80dd16d25b85,},Annotations:map[string]string{io.kubernetes.container.hash: 5d4e2ff,io.kub
ernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9465655d98132670b89f8b1b4bb34293ffb7a2078cb03bdb6715359a0533b24d,PodSandboxId:021bc484083ac9c9af70ba657aae70c173efb994c4a26d02a26eea1e0231c541,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704748681629919783,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 694b3107-b164-4318-9cbe-351b3c7e9917,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 959099fb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e03305a18bf73979aafde124c82428b312ed2a254c80cf9bf5e7e95d1beddd,PodSandboxId:928f92554635b1965f7197d9cb7898ee69ba8fe50c5a8aabde58e4a32d93f9de,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704748662540453916,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-5tknb,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 855bd86b-1148-4d96-bdf6-1b8ca6bf2883,},Annotations:map[string]string{io.kubernetes.container.hash: 8c181d46,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:da1234f262b4aaa9b7136f538e49d421679a465422da1e670ce39a53d50c975a,PodSandboxId:597cce75c95e4ac0aec8d015eb31ca0ecebea1d9a6742daa150303b74cf26225,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704748654716896551,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q5kw5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 819e5966-4eec-4fd5-883e-6cd6e22bfe03,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a034710140b433f1bcab450221c59a9d36e6f7a9bf2d20e7b32fa54b4020ac9,PodSandboxId:7f6099e305837691bc9f4d4803380d3ddafb080563fbbbeedebba3b42805de50,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704748654269178211,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q2sb7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cc81bce8-7cfc-46ed-955f-c8994a3062c0,},Annotations:map[string]string{io.kubernetes.container.hash: 5ff41904,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218e4da8dffb0ce6d3ff14e89cd9dc4861a82420089baca828d4871e4affb0b0,PodSandboxId:d1743a465d25598668ebdebdf2359f9962af8493663da28cd73cfebcc6e097a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704748639893108298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946cb165-55dd-4595-9df8-ece71694f065,},Annotations:map[string]string{io.kubernetes.container.hash: 750bb236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390eae21c6b20091d0d90c966be9d81ef44e688eb5190db76b87d3941dac8d0,PodSandboxId:3bb39979bf6de9da9cba622c44f733762e427fc6247ca4548f522522bf436a40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{I
mage:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704748609023778869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-pfmkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41cb231c-130d-4368-8185-45c5b9e90639,},Annotations:map[string]string{io.kubernetes.container.hash: b2c2b256,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eb27cae1d918918de515db17a468
bcd75dc91485c862e2e79d70f8de14bc3dc,PodSandboxId:644d60ef5fd9483fde5d2ab65cedd35053a4997a262eebd202901c8d81e163bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704748608716345248,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-89s94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec80ad4-9822-42b4-9938-33218308b42c,},Annotations:map[string]string{io.kubernetes.container.hash: f0251d1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef3df3e05b73b97085b65268bcfb5b338d1b2a06b802426cdd9ef5531a73e492,PodS
andboxId:d1743a465d25598668ebdebdf2359f9962af8493663da28cd73cfebcc6e097a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704748608735422374,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946cb165-55dd-4595-9df8-ece71694f065,},Annotations:map[string]string{io.kubernetes.container.hash: 750bb236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5190f73feeef0dac8258ed6298f7848ad70205531352ea41df11815d4cc035,PodSan
dboxId:64be60b7a2804dd8d56e8830f49da6d3bc2e4b43a4ad5a5f75e70191e8731098,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704748584460456955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def62cbf36339665f5fd12b708d518e,},Annotations:map[string]string{io.kubernetes.container.hash: 10ec175b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b171ace1b29325feb5f424a0131ad420460ef1625f4b5e4920d592542fdc323,PodSandboxId:054b6ef2bad1ca7a4b9a6363c2a2f3ba29d8d5b
550b66b54e699c438ef33720e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704748584442585447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b0ada07c94faa97506158ce58c85f071926cb37877dca4f3d914cd22a2e903,PodSandboxId:a0f14f1a6924d15c416a5b9d4f1b3b7131a47a215cca5
f2ae5036b6fac9e36fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704748584091878795,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6513a1d9c65c0b45ba424e4fcc3ee595,},Annotations:map[string]string{io.kubernetes.container.hash: dbf1dfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2abcb2bfe2753cca0644c48755fc376ac2fb98c75f5ba6fd2c285a9a971f3b,PodSandboxId:05f9a07be02a610588ba362621545bb114359b8e28763a1a17f
26d74ab873295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704748584011742473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5aaff90b-0cda-4f10-bfe0-42f5f9a23feb name=/runtime.v1.RuntimeServic
e/ListContainers
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.030020881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a3db9384-837a-4dfc-861c-bfca5a70e2c5 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.030079025Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a3db9384-837a-4dfc-861c-bfca5a70e2c5 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.031361601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2793c6ae-9021-4c73-b8ab-9be7215865a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.031799269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748841031788359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=2793c6ae-9021-4c73-b8ab-9be7215865a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.032190361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5c01b7fb-3801-447b-9d79-dff224bcd7af name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.032300120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5c01b7fb-3801-447b-9d79-dff224bcd7af name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.032604801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19ac994bdda9d1795022b00a91e494a2c39cec8a448a61cc79ec891b49a97aa6,PodSandboxId:932a3dbb25d16f0e8c19db878e9f0370246dcf327246f28ef29e7136f662de1b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704748823935909235,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-jp46d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8142407b-a384-4200-a2d2-80dd16d25b85,},Annotations:map[string]string{io.kubernetes.container.hash: 5d4e2ff,io.kub
ernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9465655d98132670b89f8b1b4bb34293ffb7a2078cb03bdb6715359a0533b24d,PodSandboxId:021bc484083ac9c9af70ba657aae70c173efb994c4a26d02a26eea1e0231c541,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704748681629919783,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 694b3107-b164-4318-9cbe-351b3c7e9917,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 959099fb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e03305a18bf73979aafde124c82428b312ed2a254c80cf9bf5e7e95d1beddd,PodSandboxId:928f92554635b1965f7197d9cb7898ee69ba8fe50c5a8aabde58e4a32d93f9de,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704748662540453916,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-5tknb,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 855bd86b-1148-4d96-bdf6-1b8ca6bf2883,},Annotations:map[string]string{io.kubernetes.container.hash: 8c181d46,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:da1234f262b4aaa9b7136f538e49d421679a465422da1e670ce39a53d50c975a,PodSandboxId:597cce75c95e4ac0aec8d015eb31ca0ecebea1d9a6742daa150303b74cf26225,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704748654716896551,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q5kw5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 819e5966-4eec-4fd5-883e-6cd6e22bfe03,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a034710140b433f1bcab450221c59a9d36e6f7a9bf2d20e7b32fa54b4020ac9,PodSandboxId:7f6099e305837691bc9f4d4803380d3ddafb080563fbbbeedebba3b42805de50,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704748654269178211,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q2sb7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cc81bce8-7cfc-46ed-955f-c8994a3062c0,},Annotations:map[string]string{io.kubernetes.container.hash: 5ff41904,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218e4da8dffb0ce6d3ff14e89cd9dc4861a82420089baca828d4871e4affb0b0,PodSandboxId:d1743a465d25598668ebdebdf2359f9962af8493663da28cd73cfebcc6e097a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704748639893108298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946cb165-55dd-4595-9df8-ece71694f065,},Annotations:map[string]string{io.kubernetes.container.hash: 750bb236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390eae21c6b20091d0d90c966be9d81ef44e688eb5190db76b87d3941dac8d0,PodSandboxId:3bb39979bf6de9da9cba622c44f733762e427fc6247ca4548f522522bf436a40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{I
mage:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704748609023778869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-pfmkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41cb231c-130d-4368-8185-45c5b9e90639,},Annotations:map[string]string{io.kubernetes.container.hash: b2c2b256,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eb27cae1d918918de515db17a468
bcd75dc91485c862e2e79d70f8de14bc3dc,PodSandboxId:644d60ef5fd9483fde5d2ab65cedd35053a4997a262eebd202901c8d81e163bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704748608716345248,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-89s94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec80ad4-9822-42b4-9938-33218308b42c,},Annotations:map[string]string{io.kubernetes.container.hash: f0251d1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef3df3e05b73b97085b65268bcfb5b338d1b2a06b802426cdd9ef5531a73e492,PodS
andboxId:d1743a465d25598668ebdebdf2359f9962af8493663da28cd73cfebcc6e097a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704748608735422374,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946cb165-55dd-4595-9df8-ece71694f065,},Annotations:map[string]string{io.kubernetes.container.hash: 750bb236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5190f73feeef0dac8258ed6298f7848ad70205531352ea41df11815d4cc035,PodSan
dboxId:64be60b7a2804dd8d56e8830f49da6d3bc2e4b43a4ad5a5f75e70191e8731098,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704748584460456955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def62cbf36339665f5fd12b708d518e,},Annotations:map[string]string{io.kubernetes.container.hash: 10ec175b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b171ace1b29325feb5f424a0131ad420460ef1625f4b5e4920d592542fdc323,PodSandboxId:054b6ef2bad1ca7a4b9a6363c2a2f3ba29d8d5b
550b66b54e699c438ef33720e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704748584442585447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b0ada07c94faa97506158ce58c85f071926cb37877dca4f3d914cd22a2e903,PodSandboxId:a0f14f1a6924d15c416a5b9d4f1b3b7131a47a215cca5
f2ae5036b6fac9e36fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704748584091878795,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6513a1d9c65c0b45ba424e4fcc3ee595,},Annotations:map[string]string{io.kubernetes.container.hash: dbf1dfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2abcb2bfe2753cca0644c48755fc376ac2fb98c75f5ba6fd2c285a9a971f3b,PodSandboxId:05f9a07be02a610588ba362621545bb114359b8e28763a1a17f
26d74ab873295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704748584011742473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5c01b7fb-3801-447b-9d79-dff224bcd7af name=/runtime.v1.RuntimeServic
e/ListContainers
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.067981543Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8e2afaf0-d99b-42c4-86a8-5df1af05756a name=/runtime.v1.RuntimeService/Version
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.068094177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8e2afaf0-d99b-42c4-86a8-5df1af05756a name=/runtime.v1.RuntimeService/Version
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.069591809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7facfc41-e3fc-4c72-a70b-0337811f7704 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.070035902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748841070024370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=7facfc41-e3fc-4c72-a70b-0337811f7704 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.070819670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=42fba80f-ec3a-4555-83a5-f625edbd954d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.070867810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=42fba80f-ec3a-4555-83a5-f625edbd954d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.071193236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19ac994bdda9d1795022b00a91e494a2c39cec8a448a61cc79ec891b49a97aa6,PodSandboxId:932a3dbb25d16f0e8c19db878e9f0370246dcf327246f28ef29e7136f662de1b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704748823935909235,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-jp46d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8142407b-a384-4200-a2d2-80dd16d25b85,},Annotations:map[string]string{io.kubernetes.container.hash: 5d4e2ff,io.kub
ernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9465655d98132670b89f8b1b4bb34293ffb7a2078cb03bdb6715359a0533b24d,PodSandboxId:021bc484083ac9c9af70ba657aae70c173efb994c4a26d02a26eea1e0231c541,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704748681629919783,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 694b3107-b164-4318-9cbe-351b3c7e9917,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 959099fb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e03305a18bf73979aafde124c82428b312ed2a254c80cf9bf5e7e95d1beddd,PodSandboxId:928f92554635b1965f7197d9cb7898ee69ba8fe50c5a8aabde58e4a32d93f9de,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704748662540453916,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-5tknb,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 855bd86b-1148-4d96-bdf6-1b8ca6bf2883,},Annotations:map[string]string{io.kubernetes.container.hash: 8c181d46,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:da1234f262b4aaa9b7136f538e49d421679a465422da1e670ce39a53d50c975a,PodSandboxId:597cce75c95e4ac0aec8d015eb31ca0ecebea1d9a6742daa150303b74cf26225,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704748654716896551,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q5kw5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 819e5966-4eec-4fd5-883e-6cd6e22bfe03,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a034710140b433f1bcab450221c59a9d36e6f7a9bf2d20e7b32fa54b4020ac9,PodSandboxId:7f6099e305837691bc9f4d4803380d3ddafb080563fbbbeedebba3b42805de50,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704748654269178211,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q2sb7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cc81bce8-7cfc-46ed-955f-c8994a3062c0,},Annotations:map[string]string{io.kubernetes.container.hash: 5ff41904,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218e4da8dffb0ce6d3ff14e89cd9dc4861a82420089baca828d4871e4affb0b0,PodSandboxId:d1743a465d25598668ebdebdf2359f9962af8493663da28cd73cfebcc6e097a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704748639893108298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946cb165-55dd-4595-9df8-ece71694f065,},Annotations:map[string]string{io.kubernetes.container.hash: 750bb236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390eae21c6b20091d0d90c966be9d81ef44e688eb5190db76b87d3941dac8d0,PodSandboxId:3bb39979bf6de9da9cba622c44f733762e427fc6247ca4548f522522bf436a40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{I
mage:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704748609023778869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-pfmkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41cb231c-130d-4368-8185-45c5b9e90639,},Annotations:map[string]string{io.kubernetes.container.hash: b2c2b256,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eb27cae1d918918de515db17a468
bcd75dc91485c862e2e79d70f8de14bc3dc,PodSandboxId:644d60ef5fd9483fde5d2ab65cedd35053a4997a262eebd202901c8d81e163bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704748608716345248,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-89s94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec80ad4-9822-42b4-9938-33218308b42c,},Annotations:map[string]string{io.kubernetes.container.hash: f0251d1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef3df3e05b73b97085b65268bcfb5b338d1b2a06b802426cdd9ef5531a73e492,PodS
andboxId:d1743a465d25598668ebdebdf2359f9962af8493663da28cd73cfebcc6e097a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704748608735422374,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946cb165-55dd-4595-9df8-ece71694f065,},Annotations:map[string]string{io.kubernetes.container.hash: 750bb236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5190f73feeef0dac8258ed6298f7848ad70205531352ea41df11815d4cc035,PodSan
dboxId:64be60b7a2804dd8d56e8830f49da6d3bc2e4b43a4ad5a5f75e70191e8731098,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704748584460456955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def62cbf36339665f5fd12b708d518e,},Annotations:map[string]string{io.kubernetes.container.hash: 10ec175b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b171ace1b29325feb5f424a0131ad420460ef1625f4b5e4920d592542fdc323,PodSandboxId:054b6ef2bad1ca7a4b9a6363c2a2f3ba29d8d5b
550b66b54e699c438ef33720e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704748584442585447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b0ada07c94faa97506158ce58c85f071926cb37877dca4f3d914cd22a2e903,PodSandboxId:a0f14f1a6924d15c416a5b9d4f1b3b7131a47a215cca5
f2ae5036b6fac9e36fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704748584091878795,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6513a1d9c65c0b45ba424e4fcc3ee595,},Annotations:map[string]string{io.kubernetes.container.hash: dbf1dfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2abcb2bfe2753cca0644c48755fc376ac2fb98c75f5ba6fd2c285a9a971f3b,PodSandboxId:05f9a07be02a610588ba362621545bb114359b8e28763a1a17f
26d74ab873295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704748584011742473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=42fba80f-ec3a-4555-83a5-f625edbd954d name=/runtime.v1.RuntimeServic
e/ListContainers
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.116986182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2562b0db-eb0a-4e4b-8c7a-24fdadbe9851 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.117043415Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2562b0db-eb0a-4e4b-8c7a-24fdadbe9851 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.117883970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=20050ef4-9cfe-4458-b941-0e8f10de728c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.118460809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748841118443040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=20050ef4-9cfe-4458-b941-0e8f10de728c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.118962962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8fbdbbb7-5db2-42d3-94a4-6a18c177ea91 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.119007267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8fbdbbb7-5db2-42d3-94a4-6a18c177ea91 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:20:41 ingress-addon-legacy-798925 crio[723]: time="2024-01-08 21:20:41.119434076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19ac994bdda9d1795022b00a91e494a2c39cec8a448a61cc79ec891b49a97aa6,PodSandboxId:932a3dbb25d16f0e8c19db878e9f0370246dcf327246f28ef29e7136f662de1b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704748823935909235,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-jp46d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8142407b-a384-4200-a2d2-80dd16d25b85,},Annotations:map[string]string{io.kubernetes.container.hash: 5d4e2ff,io.kub
ernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9465655d98132670b89f8b1b4bb34293ffb7a2078cb03bdb6715359a0533b24d,PodSandboxId:021bc484083ac9c9af70ba657aae70c173efb994c4a26d02a26eea1e0231c541,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704748681629919783,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 694b3107-b164-4318-9cbe-351b3c7e9917,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 959099fb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e03305a18bf73979aafde124c82428b312ed2a254c80cf9bf5e7e95d1beddd,PodSandboxId:928f92554635b1965f7197d9cb7898ee69ba8fe50c5a8aabde58e4a32d93f9de,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704748662540453916,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-5tknb,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 855bd86b-1148-4d96-bdf6-1b8ca6bf2883,},Annotations:map[string]string{io.kubernetes.container.hash: 8c181d46,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:da1234f262b4aaa9b7136f538e49d421679a465422da1e670ce39a53d50c975a,PodSandboxId:597cce75c95e4ac0aec8d015eb31ca0ecebea1d9a6742daa150303b74cf26225,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704748654716896551,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q5kw5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 819e5966-4eec-4fd5-883e-6cd6e22bfe03,},Annotations:map[string]string{io.kubernetes.container.hash: 861ab324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a034710140b433f1bcab450221c59a9d36e6f7a9bf2d20e7b32fa54b4020ac9,PodSandboxId:7f6099e305837691bc9f4d4803380d3ddafb080563fbbbeedebba3b42805de50,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704748654269178211,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q2sb7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cc81bce8-7cfc-46ed-955f-c8994a3062c0,},Annotations:map[string]string{io.kubernetes.container.hash: 5ff41904,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218e4da8dffb0ce6d3ff14e89cd9dc4861a82420089baca828d4871e4affb0b0,PodSandboxId:d1743a465d25598668ebdebdf2359f9962af8493663da28cd73cfebcc6e097a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704748639893108298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946cb165-55dd-4595-9df8-ece71694f065,},Annotations:map[string]string{io.kubernetes.container.hash: 750bb236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1390eae21c6b20091d0d90c966be9d81ef44e688eb5190db76b87d3941dac8d0,PodSandboxId:3bb39979bf6de9da9cba622c44f733762e427fc6247ca4548f522522bf436a40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{I
mage:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704748609023778869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-pfmkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41cb231c-130d-4368-8185-45c5b9e90639,},Annotations:map[string]string{io.kubernetes.container.hash: b2c2b256,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eb27cae1d918918de515db17a468
bcd75dc91485c862e2e79d70f8de14bc3dc,PodSandboxId:644d60ef5fd9483fde5d2ab65cedd35053a4997a262eebd202901c8d81e163bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704748608716345248,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-89s94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec80ad4-9822-42b4-9938-33218308b42c,},Annotations:map[string]string{io.kubernetes.container.hash: f0251d1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef3df3e05b73b97085b65268bcfb5b338d1b2a06b802426cdd9ef5531a73e492,PodS
andboxId:d1743a465d25598668ebdebdf2359f9962af8493663da28cd73cfebcc6e097a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704748608735422374,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946cb165-55dd-4595-9df8-ece71694f065,},Annotations:map[string]string{io.kubernetes.container.hash: 750bb236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f5190f73feeef0dac8258ed6298f7848ad70205531352ea41df11815d4cc035,PodSan
dboxId:64be60b7a2804dd8d56e8830f49da6d3bc2e4b43a4ad5a5f75e70191e8731098,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704748584460456955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6def62cbf36339665f5fd12b708d518e,},Annotations:map[string]string{io.kubernetes.container.hash: 10ec175b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b171ace1b29325feb5f424a0131ad420460ef1625f4b5e4920d592542fdc323,PodSandboxId:054b6ef2bad1ca7a4b9a6363c2a2f3ba29d8d5b
550b66b54e699c438ef33720e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704748584442585447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b0ada07c94faa97506158ce58c85f071926cb37877dca4f3d914cd22a2e903,PodSandboxId:a0f14f1a6924d15c416a5b9d4f1b3b7131a47a215cca5
f2ae5036b6fac9e36fa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704748584091878795,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6513a1d9c65c0b45ba424e4fcc3ee595,},Annotations:map[string]string{io.kubernetes.container.hash: dbf1dfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2abcb2bfe2753cca0644c48755fc376ac2fb98c75f5ba6fd2c285a9a971f3b,PodSandboxId:05f9a07be02a610588ba362621545bb114359b8e28763a1a17f
26d74ab873295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704748584011742473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-798925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8fbdbbb7-5db2-42d3-94a4-6a18c177ea91 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	19ac994bdda9d       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            17 seconds ago      Running             hello-world-app           0                   932a3dbb25d16       hello-world-app-5f5d8b66bb-jp46d
	9465655d98132       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   021bc484083ac       nginx
	34e03305a18bf       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   928f92554635b       ingress-nginx-controller-7fcf777cb7-5tknb
	da1234f262b4a       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   597cce75c95e4       ingress-nginx-admission-patch-q5kw5
	8a034710140b4       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   7f6099e305837       ingress-nginx-admission-create-q2sb7
	218e4da8dffb0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       1                   d1743a465d255       storage-provisioner
	1390eae21c6b2       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   3bb39979bf6de       coredns-66bff467f8-pfmkn
	ef3df3e05b73b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Exited              storage-provisioner       0                   d1743a465d255       storage-provisioner
	4eb27cae1d918       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   644d60ef5fd94       kube-proxy-89s94
	2f5190f73feee       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   64be60b7a2804       etcd-ingress-addon-legacy-798925
	1b171ace1b293       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   054b6ef2bad1c       kube-scheduler-ingress-addon-legacy-798925
	27b0ada07c94f       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   a0f14f1a6924d       kube-apiserver-ingress-addon-legacy-798925
	bc2abcb2bfe27       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   05f9a07be02a6       kube-controller-manager-ingress-addon-legacy-798925
	
	
	==> coredns [1390eae21c6b20091d0d90c966be9d81ef44e688eb5190db76b87d3941dac8d0] <==
	[INFO] 10.244.0.5:47873 - 13095 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000157238s
	[INFO] 10.244.0.5:47873 - 56291 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067754s
	[INFO] 10.244.0.5:47873 - 14858 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063157s
	[INFO] 10.244.0.5:47873 - 58230 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000162891s
	[INFO] 10.244.0.5:45960 - 27286 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080173s
	[INFO] 10.244.0.5:45960 - 54648 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061718s
	[INFO] 10.244.0.5:45960 - 47238 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000133789s
	[INFO] 10.244.0.5:45960 - 60484 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000052817s
	[INFO] 10.244.0.5:45960 - 52927 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055861s
	[INFO] 10.244.0.5:45960 - 13818 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050439s
	[INFO] 10.244.0.5:45960 - 56039 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060398s
	[INFO] 10.244.0.5:40150 - 58142 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000097274s
	[INFO] 10.244.0.5:44773 - 53044 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099899s
	[INFO] 10.244.0.5:44773 - 56257 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044317s
	[INFO] 10.244.0.5:40150 - 7450 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030551s
	[INFO] 10.244.0.5:44773 - 1393 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000107586s
	[INFO] 10.244.0.5:40150 - 38001 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000286073s
	[INFO] 10.244.0.5:44773 - 35324 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00022618s
	[INFO] 10.244.0.5:40150 - 31952 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000086303s
	[INFO] 10.244.0.5:44773 - 28097 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025869s
	[INFO] 10.244.0.5:44773 - 3157 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058919s
	[INFO] 10.244.0.5:44773 - 25147 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006914s
	[INFO] 10.244.0.5:40150 - 45622 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000115801s
	[INFO] 10.244.0.5:40150 - 3239 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066208s
	[INFO] 10.244.0.5:40150 - 18315 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070029s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-798925
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-798925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=ingress-addon-legacy-798925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_16_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:16:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-798925
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:20:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:20:32 +0000   Mon, 08 Jan 2024 21:16:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:20:32 +0000   Mon, 08 Jan 2024 21:16:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:20:32 +0000   Mon, 08 Jan 2024 21:16:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:20:32 +0000   Mon, 08 Jan 2024 21:16:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    ingress-addon-legacy-798925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 81cce45eaaf54d3da175b7759bcb2552
	  System UUID:                81cce45e-aaf5-4d3d-a175-b7759bcb2552
	  Boot ID:                    0bc6d48a-a9b5-492e-aab8-0138e25f8ff2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-jp46d                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 coredns-66bff467f8-pfmkn                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m55s
	  kube-system                 etcd-ingress-addon-legacy-798925                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-apiserver-ingress-addon-legacy-798925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-798925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-89s94                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-scheduler-ingress-addon-legacy-798925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m19s (x5 over 4m19s)  kubelet     Node ingress-addon-legacy-798925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x5 over 4m19s)  kubelet     Node ingress-addon-legacy-798925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x5 over 4m19s)  kubelet     Node ingress-addon-legacy-798925 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node ingress-addon-legacy-798925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node ingress-addon-legacy-798925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node ingress-addon-legacy-798925 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m59s                  kubelet     Node ingress-addon-legacy-798925 status is now: NodeReady
	  Normal  Starting                 3m52s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 8 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092127] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.409781] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.322563] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140263] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan 8 21:16] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.198501] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.106146] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.139501] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.129462] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.235767] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[  +7.872288] systemd-fstab-generator[1039]: Ignoring "noauto" for root device
	[  +2.609359] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.337528] systemd-fstab-generator[1433]: Ignoring "noauto" for root device
	[ +16.880327] kauditd_printk_skb: 6 callbacks suppressed
	[Jan 8 21:17] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.126251] kauditd_printk_skb: 6 callbacks suppressed
	[ +22.666103] kauditd_printk_skb: 7 callbacks suppressed
	[Jan 8 21:18] kauditd_printk_skb: 3 callbacks suppressed
	[Jan 8 21:20] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [2f5190f73feeef0dac8258ed6298f7848ad70205531352ea41df11815d4cc035] <==
	2024-01-08 21:16:25.417955 W | auth: simple token is not cryptographically signed
	2024-01-08 21:16:25.422677 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-08 21:16:25.429330 I | etcdserver: 97ba5874d4d591f6 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/08 21:16:25 INFO: 97ba5874d4d591f6 switched to configuration voters=(10933148304205517302)
	2024-01-08 21:16:25.430332 I | etcdserver/membership: added member 97ba5874d4d591f6 [https://192.168.39.193:2380] to cluster 9afeb12ac4c1a90a
	2024-01-08 21:16:25.432688 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 21:16:25.432802 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-08 21:16:25.432960 I | embed: listening for peers on 192.168.39.193:2380
	raft2024/01/08 21:16:25 INFO: 97ba5874d4d591f6 is starting a new election at term 1
	raft2024/01/08 21:16:25 INFO: 97ba5874d4d591f6 became candidate at term 2
	raft2024/01/08 21:16:25 INFO: 97ba5874d4d591f6 received MsgVoteResp from 97ba5874d4d591f6 at term 2
	raft2024/01/08 21:16:25 INFO: 97ba5874d4d591f6 became leader at term 2
	raft2024/01/08 21:16:25 INFO: raft.node: 97ba5874d4d591f6 elected leader 97ba5874d4d591f6 at term 2
	2024-01-08 21:16:25.702639 I | etcdserver: published {Name:ingress-addon-legacy-798925 ClientURLs:[https://192.168.39.193:2379]} to cluster 9afeb12ac4c1a90a
	2024-01-08 21:16:25.702766 I | embed: ready to serve client requests
	2024-01-08 21:16:25.703673 I | embed: ready to serve client requests
	2024-01-08 21:16:25.704419 I | embed: serving client requests on 192.168.39.193:2379
	2024-01-08 21:16:25.704519 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-08 21:16:25.705180 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-08 21:16:25.705349 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-08 21:16:25.706632 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-08 21:16:47.396713 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:577" took too long (105.932625ms) to execute
	2024-01-08 21:16:48.279168 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-pfmkn\" " with result "range_response_count:1 size:4517" took too long (155.769674ms) to execute
	2024-01-08 21:16:48.279456 W | etcdserver: request "header:<ID:10517748920431525272 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bff467f8-pfmkn.17a87bd3b4fa0cb2\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bff467f8-pfmkn.17a87bd3b4fa0cb2\" value_size:759 lease:1294376883576749036 >> failure:<>>" with result "size:16" took too long (112.681098ms) to execute
	2024-01-08 21:18:01.420541 W | etcdserver: read-only range request "key:\"/registry/events/ingress-nginx/ingress-nginx-controller-7fcf777cb7-5tknb.17a87be0c26bab83\" " with result "range_response_count:1 size:839" took too long (102.884879ms) to execute
	
	
	==> kernel <==
	 21:20:41 up 4 min,  0 users,  load average: 0.48, 0.33, 0.15
	Linux ingress-addon-legacy-798925 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [27b0ada07c94faa97506158ce58c85f071926cb37877dca4f3d914cd22a2e903] <==
	I0108 21:16:28.853837       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I0108 21:16:28.905579       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0108 21:16:28.907490       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:16:28.907580       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:16:28.907592       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:16:28.954628       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0108 21:16:29.801824       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 21:16:29.801872       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:16:29.809620       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0108 21:16:29.816500       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:16:29.816595       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0108 21:16:30.279954       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:16:30.325940       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 21:16:30.463547       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.193]
	I0108 21:16:30.464681       1 controller.go:609] quota admission added evaluator for: endpoints
	I0108 21:16:30.468111       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:16:31.161306       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	E0108 21:16:32.027062       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	I0108 21:16:32.118872       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0108 21:16:32.241320       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0108 21:16:32.554182       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:16:46.802870       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0108 21:16:46.929897       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0108 21:17:31.229831       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0108 21:17:57.799529       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [bc2abcb2bfe2753cca0644c48755fc376ac2fb98c75f5ba6fd2c285a9a971f3b] <==
	I0108 21:16:46.856699       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0108 21:16:46.861331       1 shared_informer.go:230] Caches are synced for GC 
	I0108 21:16:46.887872       1 shared_informer.go:230] Caches are synced for TTL 
	I0108 21:16:46.895874       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0108 21:16:46.909196       1 shared_informer.go:230] Caches are synced for node 
	I0108 21:16:46.910063       1 range_allocator.go:172] Starting range CIDR allocator
	I0108 21:16:46.910145       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
	I0108 21:16:46.910153       1 shared_informer.go:230] Caches are synced for cidrallocator 
	I0108 21:16:46.945815       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 21:16:46.962069       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 21:16:47.007903       1 range_allocator.go:373] Set node ingress-addon-legacy-798925 PodCIDR to [10.244.0.0/24]
	I0108 21:16:47.026015       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4e735794-8f1b-4d76-aa6e-9d1e2ddda13c", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-89s94
	I0108 21:16:47.126632       1 shared_informer.go:230] Caches are synced for attach detach 
	I0108 21:16:47.149529       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 21:16:47.167753       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 21:16:47.167791       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 21:17:31.234072       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"92041825-aa83-4250-a460-ff8fa64bb50b", APIVersion:"apps/v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-5tknb
	I0108 21:17:31.238043       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"8047e741-6627-4ed5-82ed-c77817af942d", APIVersion:"apps/v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0108 21:17:31.280945       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b4ac2cb1-7cba-4f1e-bfe6-9d5878ce3998", APIVersion:"batch/v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-q2sb7
	I0108 21:17:31.391845       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"fd3aa91a-030a-4237-b502-3f284085af3a", APIVersion:"batch/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-q5kw5
	I0108 21:17:34.941451       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b4ac2cb1-7cba-4f1e-bfe6-9d5878ce3998", APIVersion:"batch/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 21:17:34.974489       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"fd3aa91a-030a-4237-b502-3f284085af3a", APIVersion:"batch/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 21:20:20.857811       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"b6620ba7-87d5-4faf-a1dd-a216c472ba2f", APIVersion:"apps/v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0108 21:20:20.885909       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"4a9a2302-fa10-4821-af86-e685e96b2be3", APIVersion:"apps/v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-jp46d
	E0108 21:20:38.417430       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-shbj8" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [4eb27cae1d918918de515db17a468bcd75dc91485c862e2e79d70f8de14bc3dc] <==
	W0108 21:16:49.194427       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0108 21:16:49.203617       1 node.go:136] Successfully retrieved node IP: 192.168.39.193
	I0108 21:16:49.203682       1 server_others.go:186] Using iptables Proxier.
	I0108 21:16:49.203893       1 server.go:583] Version: v1.18.20
	I0108 21:16:49.206447       1 config.go:315] Starting service config controller
	I0108 21:16:49.206611       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0108 21:16:49.206725       1 config.go:133] Starting endpoints config controller
	I0108 21:16:49.206736       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0108 21:16:49.306836       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0108 21:16:49.306961       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [1b171ace1b29325feb5f424a0131ad420460ef1625f4b5e4920d592542fdc323] <==
	I0108 21:16:28.934726       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0108 21:16:28.936691       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0108 21:16:28.937156       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0108 21:16:28.941769       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:16:28.941893       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:16:28.942051       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:16:28.942150       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:16:28.942307       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:16:28.942392       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:16:28.942463       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:16:28.942541       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:16:28.942635       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:16:28.942710       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0108 21:16:28.945059       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0108 21:16:28.937200       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0108 21:16:28.947095       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:16:28.948702       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:16:29.782140       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:16:29.813673       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:16:29.884487       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:16:29.974293       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:16:30.006700       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:16:30.033642       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:16:30.055172       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0108 21:16:33.146306       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:15:57 UTC, ends at Mon 2024-01-08 21:20:41 UTC. --
	Jan 08 21:17:44 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:17:44.754377    1440 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-q2kbq" (UniqueName: "kubernetes.io/secret/d8f955c8-980c-4746-898d-0d46dc3183e6-minikube-ingress-dns-token-q2kbq") pod "kube-ingress-dns-minikube" (UID: "d8f955c8-980c-4746-898d-0d46dc3183e6")
	Jan 08 21:17:58 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:17:58.035117    1440 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 21:17:58 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:17:58.198608    1440 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9bdt2" (UniqueName: "kubernetes.io/secret/694b3107-b164-4318-9cbe-351b3c7e9917-default-token-9bdt2") pod "nginx" (UID: "694b3107-b164-4318-9cbe-351b3c7e9917")
	Jan 08 21:20:20 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:20.894259    1440 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 21:20:21 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:21.072762    1440 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9bdt2" (UniqueName: "kubernetes.io/secret/8142407b-a384-4200-a2d2-80dd16d25b85-default-token-9bdt2") pod "hello-world-app-5f5d8b66bb-jp46d" (UID: "8142407b-a384-4200-a2d2-80dd16d25b85")
	Jan 08 21:20:21 ingress-addon-legacy-798925 kubelet[1440]: E0108 21:20:21.875492    1440 secret.go:195] Couldn't get secret kube-system/minikube-ingress-dns-token-q2kbq: secret "minikube-ingress-dns-token-q2kbq" not found
	Jan 08 21:20:21 ingress-addon-legacy-798925 kubelet[1440]: E0108 21:20:21.875689    1440 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d8f955c8-980c-4746-898d-0d46dc3183e6-minikube-ingress-dns-token-q2kbq podName:d8f955c8-980c-4746-898d-0d46dc3183e6 nodeName:}" failed. No retries permitted until 2024-01-08 21:20:22.375662175 +0000 UTC m=+230.312171245 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"minikube-ingress-dns-token-q2kbq\" (UniqueName: \"kubernetes.io/secret/d8f955c8-980c-4746-898d-0d46dc3183e6-minikube-ingress-dns-token-q2kbq\") pod \"kube-ingress-dns-minikube\" (UID: \"d8f955c8-980c-4746-898d-0d46dc3183e6\") : secret \"minikube-ingress-dns-token-q2kbq\" not found"
	Jan 08 21:20:22 ingress-addon-legacy-798925 kubelet[1440]: E0108 21:20:22.379310    1440 secret.go:195] Couldn't get secret kube-system/minikube-ingress-dns-token-q2kbq: secret "minikube-ingress-dns-token-q2kbq" not found
	Jan 08 21:20:22 ingress-addon-legacy-798925 kubelet[1440]: E0108 21:20:22.379461    1440 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d8f955c8-980c-4746-898d-0d46dc3183e6-minikube-ingress-dns-token-q2kbq podName:d8f955c8-980c-4746-898d-0d46dc3183e6 nodeName:}" failed. No retries permitted until 2024-01-08 21:20:23.37943844 +0000 UTC m=+231.315947517 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"minikube-ingress-dns-token-q2kbq\" (UniqueName: \"kubernetes.io/secret/d8f955c8-980c-4746-898d-0d46dc3183e6-minikube-ingress-dns-token-q2kbq\") pod \"kube-ingress-dns-minikube\" (UID: \"d8f955c8-980c-4746-898d-0d46dc3183e6\") : secret \"minikube-ingress-dns-token-q2kbq\" not found"
	Jan 08 21:20:23 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:23.191581    1440 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dbb5b97350ef3248f59856c14417e51649e57f0c22c1c489c36223b0f11d00aa
	Jan 08 21:20:23 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:23.383555    1440 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-q2kbq" (UniqueName: "kubernetes.io/secret/d8f955c8-980c-4746-898d-0d46dc3183e6-minikube-ingress-dns-token-q2kbq") pod "d8f955c8-980c-4746-898d-0d46dc3183e6" (UID: "d8f955c8-980c-4746-898d-0d46dc3183e6")
	Jan 08 21:20:23 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:23.393315    1440 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f955c8-980c-4746-898d-0d46dc3183e6-minikube-ingress-dns-token-q2kbq" (OuterVolumeSpecName: "minikube-ingress-dns-token-q2kbq") pod "d8f955c8-980c-4746-898d-0d46dc3183e6" (UID: "d8f955c8-980c-4746-898d-0d46dc3183e6"). InnerVolumeSpecName "minikube-ingress-dns-token-q2kbq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 21:20:23 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:23.483882    1440 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-q2kbq" (UniqueName: "kubernetes.io/secret/d8f955c8-980c-4746-898d-0d46dc3183e6-minikube-ingress-dns-token-q2kbq") on node "ingress-addon-legacy-798925" DevicePath ""
	Jan 08 21:20:23 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:23.507988    1440 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dbb5b97350ef3248f59856c14417e51649e57f0c22c1c489c36223b0f11d00aa
	Jan 08 21:20:23 ingress-addon-legacy-798925 kubelet[1440]: E0108 21:20:23.508590    1440 remote_runtime.go:295] ContainerStatus "dbb5b97350ef3248f59856c14417e51649e57f0c22c1c489c36223b0f11d00aa" from runtime service failed: rpc error: code = NotFound desc = could not find container "dbb5b97350ef3248f59856c14417e51649e57f0c22c1c489c36223b0f11d00aa": container with ID starting with dbb5b97350ef3248f59856c14417e51649e57f0c22c1c489c36223b0f11d00aa not found: ID does not exist
	Jan 08 21:20:33 ingress-addon-legacy-798925 kubelet[1440]: E0108 21:20:33.598901    1440 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5tknb.17a87c0831afb780", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5tknb", UID:"855bd86b-1148-4d96-bdf6-1b8ca6bf2883", APIVersion:"v1", ResourceVersion:"448", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-798925"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f37a86375ad80, ext:241531423751, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f37a86375ad80, ext:241531423751, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5tknb.17a87c0831afb780" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 21:20:33 ingress-addon-legacy-798925 kubelet[1440]: E0108 21:20:33.617788    1440 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5tknb.17a87c0831afb780", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5tknb", UID:"855bd86b-1148-4d96-bdf6-1b8ca6bf2883", APIVersion:"v1", ResourceVersion:"448", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-798925"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f37a86375ad80, ext:241531423751, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f37a8645304a1, ext:241545929511, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5tknb.17a87c0831afb780" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 21:20:36 ingress-addon-legacy-798925 kubelet[1440]: W0108 21:20:36.243132    1440 pod_container_deletor.go:77] Container "928f92554635b1965f7197d9cb7898ee69ba8fe50c5a8aabde58e4a32d93f9de" not found in pod's containers
	Jan 08 21:20:37 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:37.727454    1440 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-wklpf" (UniqueName: "kubernetes.io/secret/855bd86b-1148-4d96-bdf6-1b8ca6bf2883-ingress-nginx-token-wklpf") pod "855bd86b-1148-4d96-bdf6-1b8ca6bf2883" (UID: "855bd86b-1148-4d96-bdf6-1b8ca6bf2883")
	Jan 08 21:20:37 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:37.727499    1440 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/855bd86b-1148-4d96-bdf6-1b8ca6bf2883-webhook-cert") pod "855bd86b-1148-4d96-bdf6-1b8ca6bf2883" (UID: "855bd86b-1148-4d96-bdf6-1b8ca6bf2883")
	Jan 08 21:20:37 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:37.731769    1440 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855bd86b-1148-4d96-bdf6-1b8ca6bf2883-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "855bd86b-1148-4d96-bdf6-1b8ca6bf2883" (UID: "855bd86b-1148-4d96-bdf6-1b8ca6bf2883"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 21:20:37 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:37.732423    1440 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855bd86b-1148-4d96-bdf6-1b8ca6bf2883-ingress-nginx-token-wklpf" (OuterVolumeSpecName: "ingress-nginx-token-wklpf") pod "855bd86b-1148-4d96-bdf6-1b8ca6bf2883" (UID: "855bd86b-1148-4d96-bdf6-1b8ca6bf2883"). InnerVolumeSpecName "ingress-nginx-token-wklpf". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 21:20:37 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:37.827906    1440 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/855bd86b-1148-4d96-bdf6-1b8ca6bf2883-webhook-cert") on node "ingress-addon-legacy-798925" DevicePath ""
	Jan 08 21:20:37 ingress-addon-legacy-798925 kubelet[1440]: I0108 21:20:37.827971    1440 reconciler.go:319] Volume detached for volume "ingress-nginx-token-wklpf" (UniqueName: "kubernetes.io/secret/855bd86b-1148-4d96-bdf6-1b8ca6bf2883-ingress-nginx-token-wklpf") on node "ingress-addon-legacy-798925" DevicePath ""
	Jan 08 21:20:38 ingress-addon-legacy-798925 kubelet[1440]: W0108 21:20:38.689144    1440 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/855bd86b-1148-4d96-bdf6-1b8ca6bf2883/volumes" does not exist
	
	
	==> storage-provisioner [218e4da8dffb0ce6d3ff14e89cd9dc4861a82420089baca828d4871e4affb0b0] <==
	I0108 21:17:20.015615       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:17:20.028182       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:17:20.028316       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:17:20.036478       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:17:20.036944       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-798925_f0506b72-859f-47f1-a84c-a05f218062b9!
	I0108 21:17:20.037352       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3663d549-d49c-48f0-ad30-87cb81741163", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-798925_f0506b72-859f-47f1-a84c-a05f218062b9 became leader
	I0108 21:17:20.138080       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-798925_f0506b72-859f-47f1-a84c-a05f218062b9!
	
	
	==> storage-provisioner [ef3df3e05b73b97085b65268bcfb5b338d1b2a06b802426cdd9ef5531a73e492] <==
	I0108 21:16:48.991825       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 21:17:18.994081       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-798925 -n ingress-addon-legacy-798925
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-798925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (177.60s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-qwxd6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-qwxd6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-qwxd6 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (196.214413ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-qwxd6): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-wmznk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-wmznk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-wmznk -- sh -c "ping -c 1 192.168.39.1": exit status 1 (209.903948ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-wmznk): exit status 1
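Note: the "ping: permission denied (are you root?)" stderr above usually indicates that busybox ping could not open a raw socket (no CAP_NET_RAW) and that unprivileged ICMP datagram sockets are not permitted for the container's group via net.ipv4.ping_group_range, rather than the host 192.168.39.1 being unreachable. A minimal manual check, reusing the pod name and profile from the log above (hypothetical diagnostic commands, not part of the test harness):
	out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-qwxd6 -- sh -c "id; cat /proc/sys/net/ipv4/ping_group_range"
	# If the pod's GID falls outside ping_group_range and the container has no CAP_NET_RAW,
	# busybox ping exits with "permission denied (are you root?)" even for a reachable host.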
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-962345 -n multinode-962345
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-962345 logs -n 25: (1.315506692s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-169549 ssh -- ls                    | mount-start-2-169549 | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC | 08 Jan 24 21:25 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-169549 ssh --                       | mount-start-2-169549 | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC | 08 Jan 24 21:25 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-169549                           | mount-start-2-169549 | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC | 08 Jan 24 21:25 UTC |
	| start   | -p mount-start-2-169549                           | mount-start-2-169549 | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC | 08 Jan 24 21:25 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-169549 | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC |                     |
	|         | --profile mount-start-2-169549                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-169549 ssh -- ls                    | mount-start-2-169549 | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC | 08 Jan 24 21:25 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-169549 ssh --                       | mount-start-2-169549 | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC | 08 Jan 24 21:25 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-169549                           | mount-start-2-169549 | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC | 08 Jan 24 21:25 UTC |
	| delete  | -p mount-start-1-153442                           | mount-start-1-153442 | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC | 08 Jan 24 21:25 UTC |
	| start   | -p multinode-962345                               | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:25 UTC | 08 Jan 24 21:27 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- apply -f                   | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- rollout                    | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- get pods -o                | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- get pods -o                | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-qwxd6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-wmznk --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-qwxd6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-wmznk --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-qwxd6 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-wmznk -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- get pods -o                | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-qwxd6                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC |                     |
	|         | busybox-5bc68d56bd-qwxd6 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC | 08 Jan 24 21:27 UTC |
	|         | busybox-5bc68d56bd-wmznk                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-962345 -- exec                       | multinode-962345     | jenkins | v1.32.0 | 08 Jan 24 21:27 UTC |                     |
	|         | busybox-5bc68d56bd-wmznk -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:25:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:25:43.212784  355334 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:25:43.213059  355334 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:25:43.213071  355334 out.go:309] Setting ErrFile to fd 2...
	I0108 21:25:43.213076  355334 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:25:43.213257  355334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 21:25:43.213864  355334 out.go:303] Setting JSON to false
	I0108 21:25:43.214804  355334 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7669,"bootTime":1704741474,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:25:43.214866  355334 start.go:138] virtualization: kvm guest
	I0108 21:25:43.217222  355334 out.go:177] * [multinode-962345] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:25:43.218649  355334 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:25:43.220016  355334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:25:43.218733  355334 notify.go:220] Checking for updates...
	I0108 21:25:43.222995  355334 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:25:43.224394  355334 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:25:43.225745  355334 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:25:43.227061  355334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:25:43.228543  355334 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:25:43.264331  355334 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:25:43.265919  355334 start.go:298] selected driver: kvm2
	I0108 21:25:43.265938  355334 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:25:43.265949  355334 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:25:43.266623  355334 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:25:43.266698  355334 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:25:43.281143  355334 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:25:43.281219  355334 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:25:43.281436  355334 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:25:43.281489  355334 cni.go:84] Creating CNI manager for ""
	I0108 21:25:43.281502  355334 cni.go:136] 0 nodes found, recommending kindnet
	I0108 21:25:43.281508  355334 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:25:43.281516  355334 start_flags.go:321] config:
	{Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:25:43.281655  355334 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:25:43.283410  355334 out.go:177] * Starting control plane node multinode-962345 in cluster multinode-962345
	I0108 21:25:43.284854  355334 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:25:43.284903  355334 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:25:43.284912  355334 cache.go:56] Caching tarball of preloaded images
	I0108 21:25:43.284994  355334 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:25:43.285005  355334 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:25:43.285338  355334 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:25:43.285382  355334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json: {Name:mk2de0d606b9009a8fdc431d9008a7652a334a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:25:43.285514  355334 start.go:365] acquiring machines lock for multinode-962345: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:25:43.285541  355334 start.go:369] acquired machines lock for "multinode-962345" in 14.781µs
	I0108 21:25:43.285560  355334 start.go:93] Provisioning new machine with config: &{Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:25:43.285633  355334 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 21:25:43.287237  355334 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 21:25:43.287456  355334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:25:43.287505  355334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:25:43.301315  355334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0108 21:25:43.301813  355334 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:25:43.302376  355334 main.go:141] libmachine: Using API Version  1
	I0108 21:25:43.302415  355334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:25:43.302785  355334 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:25:43.302980  355334 main.go:141] libmachine: (multinode-962345) Calling .GetMachineName
	I0108 21:25:43.303197  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:25:43.303348  355334 start.go:159] libmachine.API.Create for "multinode-962345" (driver="kvm2")
	I0108 21:25:43.303402  355334 client.go:168] LocalClient.Create starting
	I0108 21:25:43.303438  355334 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem
	I0108 21:25:43.303475  355334 main.go:141] libmachine: Decoding PEM data...
	I0108 21:25:43.303491  355334 main.go:141] libmachine: Parsing certificate...
	I0108 21:25:43.303565  355334 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem
	I0108 21:25:43.303587  355334 main.go:141] libmachine: Decoding PEM data...
	I0108 21:25:43.303602  355334 main.go:141] libmachine: Parsing certificate...
	I0108 21:25:43.303618  355334 main.go:141] libmachine: Running pre-create checks...
	I0108 21:25:43.303627  355334 main.go:141] libmachine: (multinode-962345) Calling .PreCreateCheck
	I0108 21:25:43.303987  355334 main.go:141] libmachine: (multinode-962345) Calling .GetConfigRaw
	I0108 21:25:43.304464  355334 main.go:141] libmachine: Creating machine...
	I0108 21:25:43.304483  355334 main.go:141] libmachine: (multinode-962345) Calling .Create
	I0108 21:25:43.304639  355334 main.go:141] libmachine: (multinode-962345) Creating KVM machine...
	I0108 21:25:43.305779  355334 main.go:141] libmachine: (multinode-962345) DBG | found existing default KVM network
	I0108 21:25:43.306479  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:43.306308  355356 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f220}
	I0108 21:25:43.311923  355334 main.go:141] libmachine: (multinode-962345) DBG | trying to create private KVM network mk-multinode-962345 192.168.39.0/24...
	I0108 21:25:43.382485  355334 main.go:141] libmachine: (multinode-962345) Setting up store path in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345 ...
	I0108 21:25:43.382517  355334 main.go:141] libmachine: (multinode-962345) DBG | private KVM network mk-multinode-962345 192.168.39.0/24 created
	I0108 21:25:43.382534  355334 main.go:141] libmachine: (multinode-962345) Building disk image from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 21:25:43.382549  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:43.382412  355356 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:25:43.382578  355334 main.go:141] libmachine: (multinode-962345) Downloading /home/jenkins/minikube-integration/17866-334768/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 21:25:43.606193  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:43.606057  355356 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa...
	I0108 21:25:43.673868  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:43.673696  355356 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/multinode-962345.rawdisk...
	I0108 21:25:43.673911  355334 main.go:141] libmachine: (multinode-962345) DBG | Writing magic tar header
	I0108 21:25:43.673954  355334 main.go:141] libmachine: (multinode-962345) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345 (perms=drwx------)
	I0108 21:25:43.673966  355334 main.go:141] libmachine: (multinode-962345) DBG | Writing SSH key tar header
	I0108 21:25:43.673979  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:43.673816  355356 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345 ...
	I0108 21:25:43.674006  355334 main.go:141] libmachine: (multinode-962345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345
	I0108 21:25:43.674025  355334 main.go:141] libmachine: (multinode-962345) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines (perms=drwxr-xr-x)
	I0108 21:25:43.674037  355334 main.go:141] libmachine: (multinode-962345) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube (perms=drwxr-xr-x)
	I0108 21:25:43.674045  355334 main.go:141] libmachine: (multinode-962345) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768 (perms=drwxrwxr-x)
	I0108 21:25:43.674054  355334 main.go:141] libmachine: (multinode-962345) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 21:25:43.674067  355334 main.go:141] libmachine: (multinode-962345) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 21:25:43.674084  355334 main.go:141] libmachine: (multinode-962345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines
	I0108 21:25:43.674095  355334 main.go:141] libmachine: (multinode-962345) Creating domain...
	I0108 21:25:43.674110  355334 main.go:141] libmachine: (multinode-962345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:25:43.674118  355334 main.go:141] libmachine: (multinode-962345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768
	I0108 21:25:43.674130  355334 main.go:141] libmachine: (multinode-962345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 21:25:43.674143  355334 main.go:141] libmachine: (multinode-962345) DBG | Checking permissions on dir: /home/jenkins
	I0108 21:25:43.674158  355334 main.go:141] libmachine: (multinode-962345) DBG | Checking permissions on dir: /home
	I0108 21:25:43.674170  355334 main.go:141] libmachine: (multinode-962345) DBG | Skipping /home - not owner
	I0108 21:25:43.675156  355334 main.go:141] libmachine: (multinode-962345) define libvirt domain using xml: 
	I0108 21:25:43.675182  355334 main.go:141] libmachine: (multinode-962345) <domain type='kvm'>
	I0108 21:25:43.675190  355334 main.go:141] libmachine: (multinode-962345)   <name>multinode-962345</name>
	I0108 21:25:43.675196  355334 main.go:141] libmachine: (multinode-962345)   <memory unit='MiB'>2200</memory>
	I0108 21:25:43.675206  355334 main.go:141] libmachine: (multinode-962345)   <vcpu>2</vcpu>
	I0108 21:25:43.675215  355334 main.go:141] libmachine: (multinode-962345)   <features>
	I0108 21:25:43.675230  355334 main.go:141] libmachine: (multinode-962345)     <acpi/>
	I0108 21:25:43.675242  355334 main.go:141] libmachine: (multinode-962345)     <apic/>
	I0108 21:25:43.675251  355334 main.go:141] libmachine: (multinode-962345)     <pae/>
	I0108 21:25:43.675257  355334 main.go:141] libmachine: (multinode-962345)     
	I0108 21:25:43.675265  355334 main.go:141] libmachine: (multinode-962345)   </features>
	I0108 21:25:43.675280  355334 main.go:141] libmachine: (multinode-962345)   <cpu mode='host-passthrough'>
	I0108 21:25:43.675288  355334 main.go:141] libmachine: (multinode-962345)   
	I0108 21:25:43.675293  355334 main.go:141] libmachine: (multinode-962345)   </cpu>
	I0108 21:25:43.675301  355334 main.go:141] libmachine: (multinode-962345)   <os>
	I0108 21:25:43.675309  355334 main.go:141] libmachine: (multinode-962345)     <type>hvm</type>
	I0108 21:25:43.675317  355334 main.go:141] libmachine: (multinode-962345)     <boot dev='cdrom'/>
	I0108 21:25:43.675322  355334 main.go:141] libmachine: (multinode-962345)     <boot dev='hd'/>
	I0108 21:25:43.675331  355334 main.go:141] libmachine: (multinode-962345)     <bootmenu enable='no'/>
	I0108 21:25:43.675336  355334 main.go:141] libmachine: (multinode-962345)   </os>
	I0108 21:25:43.675344  355334 main.go:141] libmachine: (multinode-962345)   <devices>
	I0108 21:25:43.675349  355334 main.go:141] libmachine: (multinode-962345)     <disk type='file' device='cdrom'>
	I0108 21:25:43.675412  355334 main.go:141] libmachine: (multinode-962345)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/boot2docker.iso'/>
	I0108 21:25:43.675442  355334 main.go:141] libmachine: (multinode-962345)       <target dev='hdc' bus='scsi'/>
	I0108 21:25:43.675456  355334 main.go:141] libmachine: (multinode-962345)       <readonly/>
	I0108 21:25:43.675471  355334 main.go:141] libmachine: (multinode-962345)     </disk>
	I0108 21:25:43.675486  355334 main.go:141] libmachine: (multinode-962345)     <disk type='file' device='disk'>
	I0108 21:25:43.675502  355334 main.go:141] libmachine: (multinode-962345)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 21:25:43.675522  355334 main.go:141] libmachine: (multinode-962345)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/multinode-962345.rawdisk'/>
	I0108 21:25:43.675540  355334 main.go:141] libmachine: (multinode-962345)       <target dev='hda' bus='virtio'/>
	I0108 21:25:43.675555  355334 main.go:141] libmachine: (multinode-962345)     </disk>
	I0108 21:25:43.675567  355334 main.go:141] libmachine: (multinode-962345)     <interface type='network'>
	I0108 21:25:43.675588  355334 main.go:141] libmachine: (multinode-962345)       <source network='mk-multinode-962345'/>
	I0108 21:25:43.675601  355334 main.go:141] libmachine: (multinode-962345)       <model type='virtio'/>
	I0108 21:25:43.675614  355334 main.go:141] libmachine: (multinode-962345)     </interface>
	I0108 21:25:43.675629  355334 main.go:141] libmachine: (multinode-962345)     <interface type='network'>
	I0108 21:25:43.675643  355334 main.go:141] libmachine: (multinode-962345)       <source network='default'/>
	I0108 21:25:43.675656  355334 main.go:141] libmachine: (multinode-962345)       <model type='virtio'/>
	I0108 21:25:43.675670  355334 main.go:141] libmachine: (multinode-962345)     </interface>
	I0108 21:25:43.675688  355334 main.go:141] libmachine: (multinode-962345)     <serial type='pty'>
	I0108 21:25:43.675703  355334 main.go:141] libmachine: (multinode-962345)       <target port='0'/>
	I0108 21:25:43.675715  355334 main.go:141] libmachine: (multinode-962345)     </serial>
	I0108 21:25:43.675730  355334 main.go:141] libmachine: (multinode-962345)     <console type='pty'>
	I0108 21:25:43.675744  355334 main.go:141] libmachine: (multinode-962345)       <target type='serial' port='0'/>
	I0108 21:25:43.675769  355334 main.go:141] libmachine: (multinode-962345)     </console>
	I0108 21:25:43.675789  355334 main.go:141] libmachine: (multinode-962345)     <rng model='virtio'>
	I0108 21:25:43.675801  355334 main.go:141] libmachine: (multinode-962345)       <backend model='random'>/dev/random</backend>
	I0108 21:25:43.675813  355334 main.go:141] libmachine: (multinode-962345)     </rng>
	I0108 21:25:43.675826  355334 main.go:141] libmachine: (multinode-962345)     
	I0108 21:25:43.675838  355334 main.go:141] libmachine: (multinode-962345)     
	I0108 21:25:43.675851  355334 main.go:141] libmachine: (multinode-962345)   </devices>
	I0108 21:25:43.675865  355334 main.go:141] libmachine: (multinode-962345) </domain>
	I0108 21:25:43.675880  355334 main.go:141] libmachine: (multinode-962345) 
	I0108 21:25:43.681234  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:41:6f:a2 in network default
	I0108 21:25:43.681847  355334 main.go:141] libmachine: (multinode-962345) Ensuring networks are active...
	I0108 21:25:43.681874  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:43.682693  355334 main.go:141] libmachine: (multinode-962345) Ensuring network default is active
	I0108 21:25:43.683011  355334 main.go:141] libmachine: (multinode-962345) Ensuring network mk-multinode-962345 is active
	I0108 21:25:43.683509  355334 main.go:141] libmachine: (multinode-962345) Getting domain xml...
	I0108 21:25:43.684456  355334 main.go:141] libmachine: (multinode-962345) Creating domain...
	I0108 21:25:44.912416  355334 main.go:141] libmachine: (multinode-962345) Waiting to get IP...
	I0108 21:25:44.913168  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:44.913596  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:44.913624  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:44.913576  355356 retry.go:31] will retry after 302.24745ms: waiting for machine to come up
	I0108 21:25:45.217089  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:45.217547  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:45.217591  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:45.217493  355356 retry.go:31] will retry after 283.426478ms: waiting for machine to come up
	I0108 21:25:45.502951  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:45.503315  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:45.503348  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:45.503262  355356 retry.go:31] will retry after 356.098646ms: waiting for machine to come up
	I0108 21:25:45.861067  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:45.861598  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:45.861630  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:45.861520  355356 retry.go:31] will retry after 533.374903ms: waiting for machine to come up
	I0108 21:25:46.396307  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:46.396778  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:46.396827  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:46.396726  355356 retry.go:31] will retry after 698.679917ms: waiting for machine to come up
	I0108 21:25:47.096902  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:47.097291  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:47.097323  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:47.097250  355356 retry.go:31] will retry after 810.433098ms: waiting for machine to come up
	I0108 21:25:47.909342  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:47.909700  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:47.909727  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:47.909660  355356 retry.go:31] will retry after 840.985603ms: waiting for machine to come up
	I0108 21:25:48.752426  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:48.752813  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:48.752843  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:48.752754  355356 retry.go:31] will retry after 1.330473754s: waiting for machine to come up
	I0108 21:25:50.085238  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:50.085653  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:50.085685  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:50.085597  355356 retry.go:31] will retry after 1.538329363s: waiting for machine to come up
	I0108 21:25:51.626479  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:51.627044  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:51.627071  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:51.626977  355356 retry.go:31] will retry after 2.318893397s: waiting for machine to come up
	I0108 21:25:53.947415  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:53.947833  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:53.947861  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:53.947782  355356 retry.go:31] will retry after 2.015902583s: waiting for machine to come up
	I0108 21:25:55.965110  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:55.965576  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:55.965607  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:55.965532  355356 retry.go:31] will retry after 2.209089644s: waiting for machine to come up
	I0108 21:25:58.177877  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:25:58.178161  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:25:58.178212  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:25:58.178120  355356 retry.go:31] will retry after 2.855938118s: waiting for machine to come up
	I0108 21:26:01.037550  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:01.038070  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:26:01.038114  355334 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:26:01.038002  355356 retry.go:31] will retry after 3.804169427s: waiting for machine to come up
	I0108 21:26:04.844715  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:04.845209  355334 main.go:141] libmachine: (multinode-962345) Found IP for machine: 192.168.39.239
	I0108 21:26:04.845231  355334 main.go:141] libmachine: (multinode-962345) Reserving static IP address...
	I0108 21:26:04.845242  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has current primary IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:04.845604  355334 main.go:141] libmachine: (multinode-962345) DBG | unable to find host DHCP lease matching {name: "multinode-962345", mac: "52:54:00:cf:54:bf", ip: "192.168.39.239"} in network mk-multinode-962345
	I0108 21:26:04.918386  355334 main.go:141] libmachine: (multinode-962345) DBG | Getting to WaitForSSH function...
	I0108 21:26:04.918420  355334 main.go:141] libmachine: (multinode-962345) Reserved static IP address: 192.168.39.239
	I0108 21:26:04.918435  355334 main.go:141] libmachine: (multinode-962345) Waiting for SSH to be available...
	I0108 21:26:04.921648  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:04.922060  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:04.922091  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:04.922320  355334 main.go:141] libmachine: (multinode-962345) DBG | Using SSH client type: external
	I0108 21:26:04.922349  355334 main.go:141] libmachine: (multinode-962345) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa (-rw-------)
	I0108 21:26:04.922378  355334 main.go:141] libmachine: (multinode-962345) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:26:04.922398  355334 main.go:141] libmachine: (multinode-962345) DBG | About to run SSH command:
	I0108 21:26:04.922417  355334 main.go:141] libmachine: (multinode-962345) DBG | exit 0
	I0108 21:26:05.011380  355334 main.go:141] libmachine: (multinode-962345) DBG | SSH cmd err, output: <nil>: 
	I0108 21:26:05.011706  355334 main.go:141] libmachine: (multinode-962345) KVM machine creation complete!
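The repeated "will retry after ...: waiting for machine to come up" lines above are a bounded poll with a growing delay, repeated until the new domain obtains a DHCP lease. A minimal, generic sketch of that pattern (function and variable names are illustrative, not minikube's API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it returns an address, sleeping a little
    // longer after each failed attempt, much like the retry lines in the log.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        delay := 300 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay)
            delay += delay / 2 // back off a little more before the next poll
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        ip, err := waitForIP(func() (string, error) { return "192.168.39.239", nil }, 15)
        fmt.Println(ip, err)
    }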
	I0108 21:26:05.011991  355334 main.go:141] libmachine: (multinode-962345) Calling .GetConfigRaw
	I0108 21:26:05.012571  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:26:05.012800  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:26:05.012953  355334 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 21:26:05.012965  355334 main.go:141] libmachine: (multinode-962345) Calling .GetState
	I0108 21:26:05.014270  355334 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 21:26:05.014289  355334 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 21:26:05.014305  355334 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 21:26:05.014328  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:05.016436  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.016782  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:05.016818  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.016981  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:05.017157  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:05.017285  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:05.017394  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:05.017593  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:26:05.017945  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:26:05.017957  355334 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 21:26:05.138562  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
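The "About to run SSH command: exit 0" probe above only checks that an SSH session can be opened with the machine's key and that a trivial command exits cleanly. A hedged sketch of such a probe using golang.org/x/crypto/ssh, with the key path, user, and address taken from the log (this is not minikube's own SSH runner):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", "192.168.39.239:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        // A nil error from "exit 0" is exactly the success condition the log reports.
        fmt.Println("ssh reachable:", session.Run("exit 0") == nil)
    }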
	I0108 21:26:05.138593  355334 main.go:141] libmachine: Detecting the provisioner...
	I0108 21:26:05.138606  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:05.141302  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.141650  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:05.141684  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.141903  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:05.142149  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:05.142394  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:05.142528  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:05.142693  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:26:05.143201  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:26:05.143219  355334 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 21:26:05.268002  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 21:26:05.268088  355334 main.go:141] libmachine: found compatible host: buildroot
	I0108 21:26:05.268102  355334 main.go:141] libmachine: Provisioning with buildroot...
	I0108 21:26:05.268112  355334 main.go:141] libmachine: (multinode-962345) Calling .GetMachineName
	I0108 21:26:05.268368  355334 buildroot.go:166] provisioning hostname "multinode-962345"
	I0108 21:26:05.268404  355334 main.go:141] libmachine: (multinode-962345) Calling .GetMachineName
	I0108 21:26:05.268617  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:05.271314  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.271669  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:05.271699  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.271876  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:05.272058  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:05.272236  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:05.272326  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:05.272494  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:26:05.272861  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:26:05.272875  355334 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-962345 && echo "multinode-962345" | sudo tee /etc/hostname
	I0108 21:26:05.403898  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-962345
	
	I0108 21:26:05.403942  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:05.406911  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.407189  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:05.407216  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.407408  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:05.407623  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:05.407806  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:05.407949  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:05.408094  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:26:05.408492  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:26:05.408518  355334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-962345' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-962345/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-962345' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:26:05.536547  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:26:05.536578  355334 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 21:26:05.536604  355334 buildroot.go:174] setting up certificates
	I0108 21:26:05.536615  355334 provision.go:83] configureAuth start
	I0108 21:26:05.536629  355334 main.go:141] libmachine: (multinode-962345) Calling .GetMachineName
	I0108 21:26:05.536898  355334 main.go:141] libmachine: (multinode-962345) Calling .GetIP
	I0108 21:26:05.539392  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.539723  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:05.539760  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.539907  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:05.542187  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.542530  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:05.542563  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.542699  355334 provision.go:138] copyHostCerts
	I0108 21:26:05.542742  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:26:05.542791  355334 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 21:26:05.542804  355334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:26:05.542869  355334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 21:26:05.542979  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:26:05.543004  355334 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 21:26:05.543011  355334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:26:05.543046  355334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 21:26:05.543178  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:26:05.543211  355334 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 21:26:05.543220  355334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:26:05.543252  355334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 21:26:05.543316  355334 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.multinode-962345 san=[192.168.39.239 192.168.39.239 localhost 127.0.0.1 minikube multinode-962345]
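The "generating server cert ... san=[...]" step issues a server certificate whose subject alternative names cover the VM's IP and hostnames, so the Docker/CRI endpoint can be verified by either. A simplified, self-signed sketch with Go's standard library; minikube signs with its own CA instead, and the SAN values below are just copied from the log line:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-962345"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // same horizon as CertExpiration above
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-962345"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.239"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed for brevity: template and parent are the same certificate.
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }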
	I0108 21:26:05.888918  355334 provision.go:172] copyRemoteCerts
	I0108 21:26:05.888996  355334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:26:05.889021  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:05.891439  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.891757  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:05.891790  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:05.891931  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:05.892164  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:05.892341  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:05.892538  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:26:05.980687  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:26:05.980780  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:26:06.006504  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:26:06.006570  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 21:26:06.032048  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:26:06.032119  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:26:06.056862  355334 provision.go:86] duration metric: configureAuth took 520.233319ms
	I0108 21:26:06.056896  355334 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:26:06.057080  355334 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:26:06.057162  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:06.059721  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.060056  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:06.060100  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.060233  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:06.060441  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:06.060603  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:06.060706  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:06.060865  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:26:06.061201  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:26:06.061224  355334 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:26:06.376505  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
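The literal "%!s(MISSING)" in the provisioning command above is not part of the intended shell script; it is Go's fmt package marking a format verb that received no corresponding argument, and it shows up again later in the "date +%!s(MISSING).%!N(MISSING)" command. A two-line reproduction of that rendering:

    package main

    import "fmt"

    func main() {
        // A %s verb with no matching argument renders as "%!s(MISSING)",
        // which is exactly the text embedded in the logged SSH commands.
        fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s ...\n")
    }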
	I0108 21:26:06.376544  355334 main.go:141] libmachine: Checking connection to Docker...
	I0108 21:26:06.376554  355334 main.go:141] libmachine: (multinode-962345) Calling .GetURL
	I0108 21:26:06.377956  355334 main.go:141] libmachine: (multinode-962345) DBG | Using libvirt version 6000000
	I0108 21:26:06.380192  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.380574  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:06.380611  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.380761  355334 main.go:141] libmachine: Docker is up and running!
	I0108 21:26:06.380778  355334 main.go:141] libmachine: Reticulating splines...
	I0108 21:26:06.380785  355334 client.go:171] LocalClient.Create took 23.077371655s
	I0108 21:26:06.380812  355334 start.go:167] duration metric: libmachine.API.Create for "multinode-962345" took 23.077465935s
	I0108 21:26:06.380825  355334 start.go:300] post-start starting for "multinode-962345" (driver="kvm2")
	I0108 21:26:06.380840  355334 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:26:06.380863  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:26:06.381109  355334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:26:06.381142  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:06.383280  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.383680  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:06.383712  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.383831  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:06.384026  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:06.384182  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:06.384327  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:26:06.472956  355334 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:26:06.476971  355334 command_runner.go:130] > NAME=Buildroot
	I0108 21:26:06.477000  355334 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0108 21:26:06.477009  355334 command_runner.go:130] > ID=buildroot
	I0108 21:26:06.477017  355334 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:26:06.477026  355334 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:26:06.477066  355334 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:26:06.477082  355334 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 21:26:06.477137  355334 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 21:26:06.477254  355334 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 21:26:06.477271  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /etc/ssl/certs/3419822.pem
	I0108 21:26:06.477365  355334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:26:06.485541  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:26:06.508324  355334 start.go:303] post-start completed in 127.479185ms
	I0108 21:26:06.508391  355334 main.go:141] libmachine: (multinode-962345) Calling .GetConfigRaw
	I0108 21:26:06.508955  355334 main.go:141] libmachine: (multinode-962345) Calling .GetIP
	I0108 21:26:06.512952  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.513303  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:06.513331  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.513578  355334 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:26:06.513745  355334 start.go:128] duration metric: createHost completed in 23.228101337s
	I0108 21:26:06.513770  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:06.515862  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.516175  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:06.516212  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.516324  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:06.516508  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:06.516703  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:06.516851  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:06.516995  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:26:06.517316  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:26:06.517328  355334 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:26:06.636227  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704749166.602958439
	
	I0108 21:26:06.636254  355334 fix.go:206] guest clock: 1704749166.602958439
	I0108 21:26:06.636262  355334 fix.go:219] Guest: 2024-01-08 21:26:06.602958439 +0000 UTC Remote: 2024-01-08 21:26:06.513756627 +0000 UTC m=+23.351764107 (delta=89.201812ms)
	I0108 21:26:06.636283  355334 fix.go:190] guest clock delta is within tolerance: 89.201812ms
	I0108 21:26:06.636287  355334 start.go:83] releasing machines lock for "multinode-962345", held for 23.350739156s
	I0108 21:26:06.636306  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:26:06.636592  355334 main.go:141] libmachine: (multinode-962345) Calling .GetIP
	I0108 21:26:06.639314  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.639713  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:06.639746  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.639886  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:26:06.640385  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:26:06.640556  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:26:06.640666  355334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:26:06.640706  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:06.640814  355334 ssh_runner.go:195] Run: cat /version.json
	I0108 21:26:06.640842  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:06.643247  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.643629  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:06.643666  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.643688  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.643840  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:06.644037  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:06.644227  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:06.644275  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:06.644302  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:06.644388  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:26:06.644514  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:06.644659  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:06.644827  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:06.644978  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:26:06.751467  355334 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:26:06.752300  355334 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0108 21:26:06.752459  355334 ssh_runner.go:195] Run: systemctl --version
	I0108 21:26:06.758071  355334 command_runner.go:130] > systemd 247 (247)
	I0108 21:26:06.758117  355334 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0108 21:26:06.758417  355334 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:26:06.912579  355334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:26:06.918380  355334 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 21:26:06.918836  355334 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:26:06.918906  355334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:26:06.933164  355334 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:26:06.933229  355334 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
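The find/mv step above parks any pre-existing bridge or podman CNI definitions so they cannot conflict with the CNI configuration minikube applies later. Based on the file reported in the log, its effect in this run would be a rename along the lines of (illustrative, not captured from the guest):

    /etc/cni/net.d/87-podman-bridge.conflist  ->  /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled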
	I0108 21:26:06.933242  355334 start.go:475] detecting cgroup driver to use...
	I0108 21:26:06.933320  355334 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:26:06.947112  355334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:26:06.959484  355334 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:26:06.959570  355334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:26:06.972454  355334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:26:06.984732  355334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:26:06.997541  355334 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0108 21:26:07.086509  355334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:26:07.209542  355334 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 21:26:07.209588  355334 docker.go:219] disabling docker service ...
	I0108 21:26:07.209650  355334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:26:07.223450  355334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:26:07.234456  355334 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0108 21:26:07.234536  355334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:26:07.345617  355334 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 21:26:07.345697  355334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:26:07.357388  355334 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0108 21:26:07.357784  355334 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 21:26:07.456096  355334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:26:07.468104  355334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:26:07.484244  355334 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 21:26:07.484713  355334 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:26:07.484778  355334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:26:07.493205  355334 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:26:07.493256  355334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:26:07.502256  355334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:26:07.510867  355334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
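Taken together, the sed edits above pin the pause image, the cgroup driver and the conmon cgroup in CRI-O's drop-in configuration. A sketch of how the affected keys in /etc/crio/crio.conf.d/02-crio.conf would read afterwards (reconstructed from the commands in the log, not a capture of the file) is:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"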
	I0108 21:26:07.519637  355334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:26:07.528925  355334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:26:07.536424  355334 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:26:07.536692  355334 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:26:07.536749  355334 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 21:26:07.548562  355334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
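The failed sysctl probe, the modprobe and the ip_forward write above are the usual preparation for letting bridged pod traffic pass through iptables. Condensed into the equivalent manual commands (assuming br_netfilter is available as a kernel module, as the successful modprobe here indicates), they amount to:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # resolvable once the module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"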
	I0108 21:26:07.557865  355334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:26:07.660719  355334 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:26:07.835543  355334 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:26:07.835630  355334 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:26:07.840644  355334 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 21:26:07.840670  355334 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:26:07.840680  355334 command_runner.go:130] > Device: 16h/22d	Inode: 736         Links: 1
	I0108 21:26:07.840691  355334 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:26:07.840699  355334 command_runner.go:130] > Access: 2024-01-08 21:26:07.788201118 +0000
	I0108 21:26:07.840709  355334 command_runner.go:130] > Modify: 2024-01-08 21:26:07.788201118 +0000
	I0108 21:26:07.840717  355334 command_runner.go:130] > Change: 2024-01-08 21:26:07.788201118 +0000
	I0108 21:26:07.840727  355334 command_runner.go:130] >  Birth: -
	I0108 21:26:07.841233  355334 start.go:543] Will wait 60s for crictl version
	I0108 21:26:07.841291  355334 ssh_runner.go:195] Run: which crictl
	I0108 21:26:07.845130  355334 command_runner.go:130] > /usr/bin/crictl
	I0108 21:26:07.845203  355334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:26:07.879541  355334 command_runner.go:130] > Version:  0.1.0
	I0108 21:26:07.879571  355334 command_runner.go:130] > RuntimeName:  cri-o
	I0108 21:26:07.879579  355334 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 21:26:07.879587  355334 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:26:07.879609  355334 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:26:07.879677  355334 ssh_runner.go:195] Run: crio --version
	I0108 21:26:07.923044  355334 command_runner.go:130] > crio version 1.24.1
	I0108 21:26:07.923070  355334 command_runner.go:130] > Version:          1.24.1
	I0108 21:26:07.923078  355334 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:26:07.923082  355334 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:26:07.923088  355334 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:26:07.923093  355334 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:26:07.923099  355334 command_runner.go:130] > Compiler:         gc
	I0108 21:26:07.923104  355334 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:26:07.923111  355334 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:26:07.923125  355334 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:26:07.923130  355334 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:26:07.923134  355334 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:26:07.924385  355334 ssh_runner.go:195] Run: crio --version
	I0108 21:26:07.966203  355334 command_runner.go:130] > crio version 1.24.1
	I0108 21:26:07.966229  355334 command_runner.go:130] > Version:          1.24.1
	I0108 21:26:07.966235  355334 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:26:07.966239  355334 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:26:07.966245  355334 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:26:07.966250  355334 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:26:07.966258  355334 command_runner.go:130] > Compiler:         gc
	I0108 21:26:07.966268  355334 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:26:07.966274  355334 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:26:07.966281  355334 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:26:07.966286  355334 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:26:07.966290  355334 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:26:07.969893  355334 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:26:07.971483  355334 main.go:141] libmachine: (multinode-962345) Calling .GetIP
	I0108 21:26:07.973850  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:07.974183  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:07.974212  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:07.974407  355334 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:26:07.978473  355334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
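The guarded rewrite above only touches /etc/hosts when the host.minikube.internal entry is missing; when it does fire, the appended line would simply be:

    192.168.39.1    host.minikube.internal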
	I0108 21:26:07.990465  355334 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:26:07.990531  355334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:26:08.027921  355334 command_runner.go:130] > {
	I0108 21:26:08.027946  355334 command_runner.go:130] >   "images": [
	I0108 21:26:08.027952  355334 command_runner.go:130] >   ]
	I0108 21:26:08.027956  355334 command_runner.go:130] > }
	I0108 21:26:08.029052  355334 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 21:26:08.029117  355334 ssh_runner.go:195] Run: which lz4
	I0108 21:26:08.032811  355334 command_runner.go:130] > /usr/bin/lz4
	I0108 21:26:08.033064  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 21:26:08.033184  355334 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 21:26:08.037272  355334 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:26:08.037312  355334 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:26:08.037332  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 21:26:09.846885  355334 crio.go:444] Took 1.813726 seconds to copy over tarball
	I0108 21:26:09.846958  355334 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:26:12.541898  355334 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.694895034s)
	I0108 21:26:12.541953  355334 crio.go:451] Took 2.695039 seconds to extract the tarball
	I0108 21:26:12.541963  355334 ssh_runner.go:146] rm: /preloaded.tar.lz4
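The preload path shown above copies minikube's cached image bundle (preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4) into the guest and unpacks it over /var so that CRI-O's image store is populated before kubeadm runs. Reduced to the equivalent manual steps on the guest, using the paths from this log, it would look roughly like:

    # after the cached bundle has been copied to /preloaded.tar.lz4 (minikube does this over SSH)
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json    # should now list the preloaded images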
	I0108 21:26:12.582581  355334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:26:12.653518  355334 command_runner.go:130] > {
	I0108 21:26:12.653558  355334 command_runner.go:130] >   "images": [
	I0108 21:26:12.653566  355334 command_runner.go:130] >     {
	I0108 21:26:12.653596  355334 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 21:26:12.653606  355334 command_runner.go:130] >       "repoTags": [
	I0108 21:26:12.653616  355334 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 21:26:12.653624  355334 command_runner.go:130] >       ],
	I0108 21:26:12.653637  355334 command_runner.go:130] >       "repoDigests": [
	I0108 21:26:12.653652  355334 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 21:26:12.653669  355334 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 21:26:12.653685  355334 command_runner.go:130] >       ],
	I0108 21:26:12.653697  355334 command_runner.go:130] >       "size": "65258016",
	I0108 21:26:12.653709  355334 command_runner.go:130] >       "uid": null,
	I0108 21:26:12.653717  355334 command_runner.go:130] >       "username": "",
	I0108 21:26:12.653727  355334 command_runner.go:130] >       "spec": null,
	I0108 21:26:12.653745  355334 command_runner.go:130] >       "pinned": false
	I0108 21:26:12.653754  355334 command_runner.go:130] >     },
	I0108 21:26:12.653763  355334 command_runner.go:130] >     {
	I0108 21:26:12.653776  355334 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 21:26:12.653787  355334 command_runner.go:130] >       "repoTags": [
	I0108 21:26:12.653799  355334 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 21:26:12.653839  355334 command_runner.go:130] >       ],
	I0108 21:26:12.653857  355334 command_runner.go:130] >       "repoDigests": [
	I0108 21:26:12.653871  355334 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 21:26:12.653885  355334 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 21:26:12.653895  355334 command_runner.go:130] >       ],
	I0108 21:26:12.653913  355334 command_runner.go:130] >       "size": "31470524",
	I0108 21:26:12.653924  355334 command_runner.go:130] >       "uid": null,
	I0108 21:26:12.653934  355334 command_runner.go:130] >       "username": "",
	I0108 21:26:12.653946  355334 command_runner.go:130] >       "spec": null,
	I0108 21:26:12.653956  355334 command_runner.go:130] >       "pinned": false
	I0108 21:26:12.653964  355334 command_runner.go:130] >     },
	I0108 21:26:12.653973  355334 command_runner.go:130] >     {
	I0108 21:26:12.653985  355334 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 21:26:12.653997  355334 command_runner.go:130] >       "repoTags": [
	I0108 21:26:12.654011  355334 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 21:26:12.654021  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654038  355334 command_runner.go:130] >       "repoDigests": [
	I0108 21:26:12.654055  355334 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 21:26:12.654077  355334 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 21:26:12.654088  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654097  355334 command_runner.go:130] >       "size": "53621675",
	I0108 21:26:12.654108  355334 command_runner.go:130] >       "uid": null,
	I0108 21:26:12.654118  355334 command_runner.go:130] >       "username": "",
	I0108 21:26:12.654129  355334 command_runner.go:130] >       "spec": null,
	I0108 21:26:12.654139  355334 command_runner.go:130] >       "pinned": false
	I0108 21:26:12.654149  355334 command_runner.go:130] >     },
	I0108 21:26:12.654156  355334 command_runner.go:130] >     {
	I0108 21:26:12.654167  355334 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 21:26:12.654179  355334 command_runner.go:130] >       "repoTags": [
	I0108 21:26:12.654190  355334 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 21:26:12.654200  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654208  355334 command_runner.go:130] >       "repoDigests": [
	I0108 21:26:12.654225  355334 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 21:26:12.654241  355334 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 21:26:12.654262  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654274  355334 command_runner.go:130] >       "size": "295456551",
	I0108 21:26:12.654288  355334 command_runner.go:130] >       "uid": {
	I0108 21:26:12.654299  355334 command_runner.go:130] >         "value": "0"
	I0108 21:26:12.654309  355334 command_runner.go:130] >       },
	I0108 21:26:12.654318  355334 command_runner.go:130] >       "username": "",
	I0108 21:26:12.654358  355334 command_runner.go:130] >       "spec": null,
	I0108 21:26:12.654370  355334 command_runner.go:130] >       "pinned": false
	I0108 21:26:12.654377  355334 command_runner.go:130] >     },
	I0108 21:26:12.654384  355334 command_runner.go:130] >     {
	I0108 21:26:12.654396  355334 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 21:26:12.654407  355334 command_runner.go:130] >       "repoTags": [
	I0108 21:26:12.654419  355334 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 21:26:12.654429  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654438  355334 command_runner.go:130] >       "repoDigests": [
	I0108 21:26:12.654454  355334 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 21:26:12.654471  355334 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 21:26:12.654480  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654491  355334 command_runner.go:130] >       "size": "127226832",
	I0108 21:26:12.654500  355334 command_runner.go:130] >       "uid": {
	I0108 21:26:12.654516  355334 command_runner.go:130] >         "value": "0"
	I0108 21:26:12.654527  355334 command_runner.go:130] >       },
	I0108 21:26:12.654536  355334 command_runner.go:130] >       "username": "",
	I0108 21:26:12.654547  355334 command_runner.go:130] >       "spec": null,
	I0108 21:26:12.654557  355334 command_runner.go:130] >       "pinned": false
	I0108 21:26:12.654565  355334 command_runner.go:130] >     },
	I0108 21:26:12.654575  355334 command_runner.go:130] >     {
	I0108 21:26:12.654587  355334 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 21:26:12.654598  355334 command_runner.go:130] >       "repoTags": [
	I0108 21:26:12.654610  355334 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 21:26:12.654620  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654630  355334 command_runner.go:130] >       "repoDigests": [
	I0108 21:26:12.654647  355334 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 21:26:12.654664  355334 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 21:26:12.654674  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654683  355334 command_runner.go:130] >       "size": "123261750",
	I0108 21:26:12.654693  355334 command_runner.go:130] >       "uid": {
	I0108 21:26:12.654702  355334 command_runner.go:130] >         "value": "0"
	I0108 21:26:12.654716  355334 command_runner.go:130] >       },
	I0108 21:26:12.654728  355334 command_runner.go:130] >       "username": "",
	I0108 21:26:12.654739  355334 command_runner.go:130] >       "spec": null,
	I0108 21:26:12.654746  355334 command_runner.go:130] >       "pinned": false
	I0108 21:26:12.654753  355334 command_runner.go:130] >     },
	I0108 21:26:12.654763  355334 command_runner.go:130] >     {
	I0108 21:26:12.654776  355334 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 21:26:12.654786  355334 command_runner.go:130] >       "repoTags": [
	I0108 21:26:12.654796  355334 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 21:26:12.654811  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654822  355334 command_runner.go:130] >       "repoDigests": [
	I0108 21:26:12.654836  355334 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 21:26:12.654852  355334 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 21:26:12.654862  355334 command_runner.go:130] >       ],
	I0108 21:26:12.654871  355334 command_runner.go:130] >       "size": "74749335",
	I0108 21:26:12.654882  355334 command_runner.go:130] >       "uid": null,
	I0108 21:26:12.654891  355334 command_runner.go:130] >       "username": "",
	I0108 21:26:12.654902  355334 command_runner.go:130] >       "spec": null,
	I0108 21:26:12.654920  355334 command_runner.go:130] >       "pinned": false
	I0108 21:26:12.654935  355334 command_runner.go:130] >     },
	I0108 21:26:12.654945  355334 command_runner.go:130] >     {
	I0108 21:26:12.654957  355334 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 21:26:12.654976  355334 command_runner.go:130] >       "repoTags": [
	I0108 21:26:12.654989  355334 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 21:26:12.655000  355334 command_runner.go:130] >       ],
	I0108 21:26:12.655010  355334 command_runner.go:130] >       "repoDigests": [
	I0108 21:26:12.655048  355334 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 21:26:12.655064  355334 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 21:26:12.655072  355334 command_runner.go:130] >       ],
	I0108 21:26:12.655083  355334 command_runner.go:130] >       "size": "61551410",
	I0108 21:26:12.655092  355334 command_runner.go:130] >       "uid": {
	I0108 21:26:12.655107  355334 command_runner.go:130] >         "value": "0"
	I0108 21:26:12.655117  355334 command_runner.go:130] >       },
	I0108 21:26:12.655126  355334 command_runner.go:130] >       "username": "",
	I0108 21:26:12.655137  355334 command_runner.go:130] >       "spec": null,
	I0108 21:26:12.655147  355334 command_runner.go:130] >       "pinned": false
	I0108 21:26:12.655157  355334 command_runner.go:130] >     },
	I0108 21:26:12.655168  355334 command_runner.go:130] >     {
	I0108 21:26:12.655180  355334 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 21:26:12.655191  355334 command_runner.go:130] >       "repoTags": [
	I0108 21:26:12.655202  355334 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 21:26:12.655213  355334 command_runner.go:130] >       ],
	I0108 21:26:12.655221  355334 command_runner.go:130] >       "repoDigests": [
	I0108 21:26:12.655238  355334 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 21:26:12.655254  355334 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 21:26:12.655263  355334 command_runner.go:130] >       ],
	I0108 21:26:12.655273  355334 command_runner.go:130] >       "size": "750414",
	I0108 21:26:12.655282  355334 command_runner.go:130] >       "uid": {
	I0108 21:26:12.655293  355334 command_runner.go:130] >         "value": "65535"
	I0108 21:26:12.655301  355334 command_runner.go:130] >       },
	I0108 21:26:12.655316  355334 command_runner.go:130] >       "username": "",
	I0108 21:26:12.655333  355334 command_runner.go:130] >       "spec": null,
	I0108 21:26:12.655344  355334 command_runner.go:130] >       "pinned": false
	I0108 21:26:12.655351  355334 command_runner.go:130] >     }
	I0108 21:26:12.655379  355334 command_runner.go:130] >   ]
	I0108 21:26:12.655389  355334 command_runner.go:130] > }
	I0108 21:26:12.655561  355334 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:26:12.655577  355334 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:26:12.655667  355334 ssh_runner.go:195] Run: crio config
	I0108 21:26:12.702998  355334 command_runner.go:130] ! time="2024-01-08 21:26:12.679647153Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 21:26:12.703321  355334 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 21:26:12.708046  355334 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 21:26:12.708085  355334 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 21:26:12.708099  355334 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 21:26:12.708105  355334 command_runner.go:130] > #
	I0108 21:26:12.708116  355334 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 21:26:12.708125  355334 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 21:26:12.708133  355334 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 21:26:12.708143  355334 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 21:26:12.708149  355334 command_runner.go:130] > # reload'.
	I0108 21:26:12.708164  355334 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 21:26:12.708176  355334 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 21:26:12.708187  355334 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 21:26:12.708202  355334 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 21:26:12.708209  355334 command_runner.go:130] > [crio]
	I0108 21:26:12.708223  355334 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 21:26:12.708235  355334 command_runner.go:130] > # containers images, in this directory.
	I0108 21:26:12.708244  355334 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 21:26:12.708264  355334 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 21:26:12.708275  355334 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 21:26:12.708288  355334 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 21:26:12.708302  355334 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 21:26:12.708313  355334 command_runner.go:130] > storage_driver = "overlay"
	I0108 21:26:12.708327  355334 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 21:26:12.708340  355334 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 21:26:12.708349  355334 command_runner.go:130] > storage_option = [
	I0108 21:26:12.708357  355334 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 21:26:12.708362  355334 command_runner.go:130] > ]
	I0108 21:26:12.708381  355334 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 21:26:12.708396  355334 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 21:26:12.708407  355334 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 21:26:12.708426  355334 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 21:26:12.708444  355334 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 21:26:12.708456  355334 command_runner.go:130] > # always happen on a node reboot
	I0108 21:26:12.708465  355334 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 21:26:12.708478  355334 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 21:26:12.708489  355334 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 21:26:12.708511  355334 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 21:26:12.708523  355334 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 21:26:12.708540  355334 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 21:26:12.708556  355334 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 21:26:12.708566  355334 command_runner.go:130] > # internal_wipe = true
	I0108 21:26:12.708576  355334 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 21:26:12.708590  355334 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 21:26:12.708602  355334 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 21:26:12.708615  355334 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 21:26:12.708632  355334 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 21:26:12.708642  355334 command_runner.go:130] > [crio.api]
	I0108 21:26:12.708652  355334 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 21:26:12.708664  355334 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 21:26:12.708674  355334 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 21:26:12.708685  355334 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 21:26:12.708698  355334 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 21:26:12.708711  355334 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 21:26:12.708721  355334 command_runner.go:130] > # stream_port = "0"
	I0108 21:26:12.708734  355334 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 21:26:12.708742  355334 command_runner.go:130] > # stream_enable_tls = false
	I0108 21:26:12.708756  355334 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 21:26:12.708766  355334 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 21:26:12.708777  355334 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 21:26:12.708790  355334 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 21:26:12.708800  355334 command_runner.go:130] > # minutes.
	I0108 21:26:12.708808  355334 command_runner.go:130] > # stream_tls_cert = ""
	I0108 21:26:12.708822  355334 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 21:26:12.708838  355334 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 21:26:12.708854  355334 command_runner.go:130] > # stream_tls_key = ""
	I0108 21:26:12.708867  355334 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 21:26:12.708881  355334 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 21:26:12.708894  355334 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 21:26:12.708903  355334 command_runner.go:130] > # stream_tls_ca = ""
	I0108 21:26:12.708917  355334 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:26:12.708928  355334 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 21:26:12.708941  355334 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:26:12.708952  355334 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 21:26:12.708984  355334 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 21:26:12.708997  355334 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 21:26:12.709003  355334 command_runner.go:130] > [crio.runtime]
	I0108 21:26:12.709013  355334 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 21:26:12.709024  355334 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 21:26:12.709035  355334 command_runner.go:130] > # "nofile=1024:2048"
	I0108 21:26:12.709049  355334 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 21:26:12.709060  355334 command_runner.go:130] > # default_ulimits = [
	I0108 21:26:12.709079  355334 command_runner.go:130] > # ]
	I0108 21:26:12.709093  355334 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 21:26:12.709100  355334 command_runner.go:130] > # no_pivot = false
	I0108 21:26:12.709113  355334 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 21:26:12.709127  355334 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 21:26:12.709139  355334 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 21:26:12.709153  355334 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 21:26:12.709165  355334 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 21:26:12.709179  355334 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:26:12.709188  355334 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 21:26:12.709198  355334 command_runner.go:130] > # Cgroup setting for conmon
	I0108 21:26:12.709211  355334 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 21:26:12.709222  355334 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 21:26:12.709237  355334 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 21:26:12.709249  355334 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 21:26:12.709264  355334 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:26:12.709272  355334 command_runner.go:130] > conmon_env = [
	I0108 21:26:12.709286  355334 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 21:26:12.709301  355334 command_runner.go:130] > ]
	I0108 21:26:12.709314  355334 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 21:26:12.709326  355334 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 21:26:12.709340  355334 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 21:26:12.709349  355334 command_runner.go:130] > # default_env = [
	I0108 21:26:12.709355  355334 command_runner.go:130] > # ]
	I0108 21:26:12.709365  355334 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 21:26:12.709375  355334 command_runner.go:130] > # selinux = false
	I0108 21:26:12.709387  355334 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 21:26:12.709401  355334 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 21:26:12.709415  355334 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 21:26:12.709425  355334 command_runner.go:130] > # seccomp_profile = ""
	I0108 21:26:12.709438  355334 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 21:26:12.709449  355334 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 21:26:12.709462  355334 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 21:26:12.709474  355334 command_runner.go:130] > # which might increase security.
	I0108 21:26:12.709485  355334 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 21:26:12.709499  355334 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 21:26:12.709517  355334 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 21:26:12.709531  355334 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 21:26:12.709544  355334 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 21:26:12.709556  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:26:12.709567  355334 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 21:26:12.709580  355334 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 21:26:12.709591  355334 command_runner.go:130] > # the cgroup blockio controller.
	I0108 21:26:12.709602  355334 command_runner.go:130] > # blockio_config_file = ""
	I0108 21:26:12.709616  355334 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 21:26:12.709627  355334 command_runner.go:130] > # irqbalance daemon.
	I0108 21:26:12.709640  355334 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 21:26:12.709655  355334 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 21:26:12.709667  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:26:12.709677  355334 command_runner.go:130] > # rdt_config_file = ""
	I0108 21:26:12.709690  355334 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 21:26:12.709701  355334 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 21:26:12.709712  355334 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 21:26:12.709723  355334 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 21:26:12.709738  355334 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 21:26:12.709752  355334 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 21:26:12.709788  355334 command_runner.go:130] > # will be added.
	I0108 21:26:12.709806  355334 command_runner.go:130] > # default_capabilities = [
	I0108 21:26:12.709812  355334 command_runner.go:130] > # 	"CHOWN",
	I0108 21:26:12.709819  355334 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 21:26:12.709827  355334 command_runner.go:130] > # 	"FSETID",
	I0108 21:26:12.709838  355334 command_runner.go:130] > # 	"FOWNER",
	I0108 21:26:12.709854  355334 command_runner.go:130] > # 	"SETGID",
	I0108 21:26:12.709864  355334 command_runner.go:130] > # 	"SETUID",
	I0108 21:26:12.709873  355334 command_runner.go:130] > # 	"SETPCAP",
	I0108 21:26:12.709884  355334 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 21:26:12.709891  355334 command_runner.go:130] > # 	"KILL",
	I0108 21:26:12.709898  355334 command_runner.go:130] > # ]
	I0108 21:26:12.709908  355334 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 21:26:12.709922  355334 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:26:12.709933  355334 command_runner.go:130] > # default_sysctls = [
	I0108 21:26:12.709941  355334 command_runner.go:130] > # ]
	I0108 21:26:12.709954  355334 command_runner.go:130] > # List of devices on the host that a
	I0108 21:26:12.709969  355334 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 21:26:12.709979  355334 command_runner.go:130] > # allowed_devices = [
	I0108 21:26:12.709989  355334 command_runner.go:130] > # 	"/dev/fuse",
	I0108 21:26:12.709997  355334 command_runner.go:130] > # ]
	I0108 21:26:12.710007  355334 command_runner.go:130] > # List of additional devices. specified as
	I0108 21:26:12.710027  355334 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 21:26:12.710039  355334 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 21:26:12.710084  355334 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:26:12.710094  355334 command_runner.go:130] > # additional_devices = [
	I0108 21:26:12.710101  355334 command_runner.go:130] > # ]
	I0108 21:26:12.710114  355334 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 21:26:12.710125  355334 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 21:26:12.710135  355334 command_runner.go:130] > # 	"/etc/cdi",
	I0108 21:26:12.710142  355334 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 21:26:12.710151  355334 command_runner.go:130] > # ]
	I0108 21:26:12.710168  355334 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 21:26:12.710181  355334 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 21:26:12.710193  355334 command_runner.go:130] > # Defaults to false.
	I0108 21:26:12.710206  355334 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 21:26:12.710220  355334 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 21:26:12.710234  355334 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 21:26:12.710244  355334 command_runner.go:130] > # hooks_dir = [
	I0108 21:26:12.710254  355334 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 21:26:12.710262  355334 command_runner.go:130] > # ]
	I0108 21:26:12.710273  355334 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 21:26:12.710287  355334 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 21:26:12.710299  355334 command_runner.go:130] > # its default mounts from the following two files:
	I0108 21:26:12.710308  355334 command_runner.go:130] > #
	I0108 21:26:12.710320  355334 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 21:26:12.710333  355334 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 21:26:12.710346  355334 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 21:26:12.710357  355334 command_runner.go:130] > #
	I0108 21:26:12.710371  355334 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 21:26:12.710386  355334 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 21:26:12.710400  355334 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 21:26:12.710416  355334 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 21:26:12.710424  355334 command_runner.go:130] > #
	I0108 21:26:12.710432  355334 command_runner.go:130] > # default_mounts_file = ""
	I0108 21:26:12.710445  355334 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 21:26:12.710460  355334 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 21:26:12.710470  355334 command_runner.go:130] > pids_limit = 1024
	I0108 21:26:12.710484  355334 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0108 21:26:12.710498  355334 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 21:26:12.710512  355334 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 21:26:12.710528  355334 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 21:26:12.710539  355334 command_runner.go:130] > # log_size_max = -1
	I0108 21:26:12.710552  355334 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0108 21:26:12.710563  355334 command_runner.go:130] > # log_to_journald = false
	I0108 21:26:12.710575  355334 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 21:26:12.710587  355334 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 21:26:12.710596  355334 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 21:26:12.710609  355334 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 21:26:12.710621  355334 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 21:26:12.710637  355334 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 21:26:12.710650  355334 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 21:26:12.710660  355334 command_runner.go:130] > # read_only = false
	I0108 21:26:12.710674  355334 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 21:26:12.710686  355334 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 21:26:12.710696  355334 command_runner.go:130] > # live configuration reload.
	I0108 21:26:12.710705  355334 command_runner.go:130] > # log_level = "info"
	I0108 21:26:12.710718  355334 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 21:26:12.710730  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:26:12.710741  355334 command_runner.go:130] > # log_filter = ""
	I0108 21:26:12.710755  355334 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 21:26:12.710768  355334 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 21:26:12.710779  355334 command_runner.go:130] > # separated by comma.
	I0108 21:26:12.710789  355334 command_runner.go:130] > # uid_mappings = ""
	I0108 21:26:12.710800  355334 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 21:26:12.710814  355334 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 21:26:12.710825  355334 command_runner.go:130] > # separated by comma.
	I0108 21:26:12.710834  355334 command_runner.go:130] > # gid_mappings = ""
	I0108 21:26:12.710855  355334 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 21:26:12.710869  355334 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:26:12.710883  355334 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:26:12.710894  355334 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 21:26:12.710905  355334 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 21:26:12.710919  355334 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:26:12.710932  355334 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:26:12.710943  355334 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 21:26:12.710957  355334 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 21:26:12.710970  355334 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 21:26:12.710983  355334 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 21:26:12.710994  355334 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 21:26:12.711005  355334 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 21:26:12.711018  355334 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 21:26:12.711029  355334 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 21:26:12.711041  355334 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 21:26:12.711052  355334 command_runner.go:130] > drop_infra_ctr = false
	I0108 21:26:12.711066  355334 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 21:26:12.711083  355334 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 21:26:12.711098  355334 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 21:26:12.711109  355334 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 21:26:12.711120  355334 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 21:26:12.711132  355334 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 21:26:12.711143  355334 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 21:26:12.711158  355334 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 21:26:12.711265  355334 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 21:26:12.711285  355334 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 21:26:12.711302  355334 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 21:26:12.711317  355334 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 21:26:12.711328  355334 command_runner.go:130] > # default_runtime = "runc"
	I0108 21:26:12.711340  355334 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 21:26:12.711354  355334 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0108 21:26:12.711399  355334 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 21:26:12.711412  355334 command_runner.go:130] > # creation as a file is not desired either.
	I0108 21:26:12.711428  355334 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 21:26:12.711438  355334 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 21:26:12.711451  355334 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 21:26:12.711461  355334 command_runner.go:130] > # ]
	I0108 21:26:12.711473  355334 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 21:26:12.711487  355334 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 21:26:12.711505  355334 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 21:26:12.711519  355334 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 21:26:12.711528  355334 command_runner.go:130] > #
	I0108 21:26:12.711537  355334 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 21:26:12.711549  355334 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 21:26:12.711561  355334 command_runner.go:130] > #  runtime_type = "oci"
	I0108 21:26:12.711573  355334 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 21:26:12.711583  355334 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 21:26:12.711593  355334 command_runner.go:130] > #  allowed_annotations = []
	I0108 21:26:12.711601  355334 command_runner.go:130] > # Where:
	I0108 21:26:12.711611  355334 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 21:26:12.711625  355334 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 21:26:12.711639  355334 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 21:26:12.711654  355334 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 21:26:12.711673  355334 command_runner.go:130] > #   in $PATH.
	I0108 21:26:12.711688  355334 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 21:26:12.711699  355334 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 21:26:12.711713  355334 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 21:26:12.711720  355334 command_runner.go:130] > #   state.
	I0108 21:26:12.711734  355334 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 21:26:12.711826  355334 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 21:26:12.711852  355334 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 21:26:12.711878  355334 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 21:26:12.711893  355334 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 21:26:12.711907  355334 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 21:26:12.711920  355334 command_runner.go:130] > #   The currently recognized values are:
	I0108 21:26:12.711935  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 21:26:12.711952  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 21:26:12.711965  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 21:26:12.711979  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 21:26:12.711995  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 21:26:12.712009  355334 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 21:26:12.712025  355334 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 21:26:12.712111  355334 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 21:26:12.712131  355334 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 21:26:12.712141  355334 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 21:26:12.712153  355334 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 21:26:12.712161  355334 command_runner.go:130] > runtime_type = "oci"
	I0108 21:26:12.712172  355334 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 21:26:12.712182  355334 command_runner.go:130] > runtime_config_path = ""
	I0108 21:26:12.712190  355334 command_runner.go:130] > monitor_path = ""
	I0108 21:26:12.712201  355334 command_runner.go:130] > monitor_cgroup = ""
	I0108 21:26:12.712209  355334 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 21:26:12.712230  355334 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 21:26:12.712241  355334 command_runner.go:130] > # running containers
	I0108 21:26:12.712252  355334 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 21:26:12.712265  355334 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 21:26:12.712339  355334 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 21:26:12.712353  355334 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 21:26:12.712362  355334 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 21:26:12.712375  355334 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 21:26:12.712387  355334 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 21:26:12.712397  355334 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 21:26:12.712409  355334 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 21:26:12.712420  355334 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 21:26:12.712433  355334 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 21:26:12.712445  355334 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 21:26:12.712459  355334 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 21:26:12.712476  355334 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0108 21:26:12.712492  355334 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 21:26:12.712505  355334 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 21:26:12.712524  355334 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 21:26:12.712546  355334 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 21:26:12.712560  355334 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 21:26:12.712575  355334 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 21:26:12.712585  355334 command_runner.go:130] > # Example:
	I0108 21:26:12.712594  355334 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 21:26:12.712605  355334 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 21:26:12.712620  355334 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 21:26:12.712632  355334 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 21:26:12.712642  355334 command_runner.go:130] > # cpuset = "0-1"
	I0108 21:26:12.712650  355334 command_runner.go:130] > # cpushares = 0
	I0108 21:26:12.712660  355334 command_runner.go:130] > # Where:
	I0108 21:26:12.712673  355334 command_runner.go:130] > # The workload name is workload-type.
	I0108 21:26:12.712693  355334 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 21:26:12.712705  355334 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 21:26:12.712716  355334 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 21:26:12.712733  355334 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 21:26:12.712746  355334 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 21:26:12.712754  355334 command_runner.go:130] > # 
	I0108 21:26:12.712768  355334 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 21:26:12.712777  355334 command_runner.go:130] > #
	I0108 21:26:12.712788  355334 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 21:26:12.712802  355334 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 21:26:12.712815  355334 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 21:26:12.712829  355334 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 21:26:12.712846  355334 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 21:26:12.712856  355334 command_runner.go:130] > [crio.image]
	I0108 21:26:12.712867  355334 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 21:26:12.712891  355334 command_runner.go:130] > # default_transport = "docker://"
	I0108 21:26:12.712905  355334 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 21:26:12.712916  355334 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:26:12.712927  355334 command_runner.go:130] > # global_auth_file = ""
	I0108 21:26:12.712939  355334 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 21:26:12.712950  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:26:12.712957  355334 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 21:26:12.712966  355334 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 21:26:12.712974  355334 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:26:12.712981  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:26:12.712988  355334 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 21:26:12.712997  355334 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 21:26:12.713010  355334 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 21:26:12.713021  355334 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 21:26:12.713033  355334 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 21:26:12.713044  355334 command_runner.go:130] > # pause_command = "/pause"
	I0108 21:26:12.713053  355334 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 21:26:12.713064  355334 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 21:26:12.713079  355334 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 21:26:12.713093  355334 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 21:26:12.713105  355334 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 21:26:12.713116  355334 command_runner.go:130] > # signature_policy = ""
	I0108 21:26:12.713127  355334 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 21:26:12.713141  355334 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 21:26:12.713150  355334 command_runner.go:130] > # changing them here.
	I0108 21:26:12.713161  355334 command_runner.go:130] > # insecure_registries = [
	I0108 21:26:12.713168  355334 command_runner.go:130] > # ]
	I0108 21:26:12.713183  355334 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 21:26:12.713196  355334 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 21:26:12.713206  355334 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 21:26:12.713219  355334 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 21:26:12.713230  355334 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 21:26:12.713244  355334 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 21:26:12.713258  355334 command_runner.go:130] > # CNI plugins.
	I0108 21:26:12.713268  355334 command_runner.go:130] > [crio.network]
	I0108 21:26:12.713279  355334 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 21:26:12.713292  355334 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 21:26:12.713301  355334 command_runner.go:130] > # cni_default_network = ""
	I0108 21:26:12.713312  355334 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 21:26:12.713328  355334 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 21:26:12.713341  355334 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 21:26:12.713351  355334 command_runner.go:130] > # plugin_dirs = [
	I0108 21:26:12.713360  355334 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 21:26:12.713369  355334 command_runner.go:130] > # ]
	I0108 21:26:12.713380  355334 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 21:26:12.713389  355334 command_runner.go:130] > [crio.metrics]
	I0108 21:26:12.713397  355334 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 21:26:12.713404  355334 command_runner.go:130] > enable_metrics = true
	I0108 21:26:12.713411  355334 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 21:26:12.713419  355334 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 21:26:12.713429  355334 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 21:26:12.713447  355334 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 21:26:12.713460  355334 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 21:26:12.713471  355334 command_runner.go:130] > # metrics_collectors = [
	I0108 21:26:12.713479  355334 command_runner.go:130] > # 	"operations",
	I0108 21:26:12.713490  355334 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 21:26:12.713501  355334 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 21:26:12.713509  355334 command_runner.go:130] > # 	"operations_errors",
	I0108 21:26:12.713519  355334 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 21:26:12.713528  355334 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 21:26:12.713539  355334 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 21:26:12.713556  355334 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 21:26:12.713566  355334 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 21:26:12.713577  355334 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 21:26:12.713587  355334 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 21:26:12.713595  355334 command_runner.go:130] > # 	"containers_oom_total",
	I0108 21:26:12.713605  355334 command_runner.go:130] > # 	"containers_oom",
	I0108 21:26:12.713616  355334 command_runner.go:130] > # 	"processes_defunct",
	I0108 21:26:12.713625  355334 command_runner.go:130] > # 	"operations_total",
	I0108 21:26:12.713640  355334 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 21:26:12.713651  355334 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 21:26:12.713663  355334 command_runner.go:130] > # 	"operations_errors_total",
	I0108 21:26:12.713672  355334 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 21:26:12.713683  355334 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 21:26:12.713694  355334 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 21:26:12.713703  355334 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 21:26:12.713714  355334 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 21:26:12.713736  355334 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 21:26:12.713745  355334 command_runner.go:130] > # ]
	I0108 21:26:12.713755  355334 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 21:26:12.713765  355334 command_runner.go:130] > # metrics_port = 9090
	I0108 21:26:12.713778  355334 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 21:26:12.713789  355334 command_runner.go:130] > # metrics_socket = ""
	I0108 21:26:12.713801  355334 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 21:26:12.713815  355334 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 21:26:12.713828  355334 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 21:26:12.713844  355334 command_runner.go:130] > # certificate on any modification event.
	I0108 21:26:12.713857  355334 command_runner.go:130] > # metrics_cert = ""
	I0108 21:26:12.713870  355334 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 21:26:12.713888  355334 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 21:26:12.713899  355334 command_runner.go:130] > # metrics_key = ""
	I0108 21:26:12.713911  355334 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 21:26:12.713918  355334 command_runner.go:130] > [crio.tracing]
	I0108 21:26:12.713931  355334 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 21:26:12.713942  355334 command_runner.go:130] > # enable_tracing = false
	I0108 21:26:12.713954  355334 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 21:26:12.713966  355334 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 21:26:12.713979  355334 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 21:26:12.713990  355334 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 21:26:12.714003  355334 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 21:26:12.714012  355334 command_runner.go:130] > [crio.stats]
	I0108 21:26:12.714023  355334 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 21:26:12.714035  355334 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 21:26:12.714046  355334 command_runner.go:130] > # stats_collection_period = 0
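	For reference, the workloads mechanism described in the crio.conf comments above is driven entirely by pod annotations. A minimal, hypothetical pod manifest that opts into the commented-out "workload-type" example (the pod name, container name, image and cpushares value are illustrative and not taken from this test run) could look like this:
	
	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    # activation annotation: key-only, the value is ignored by CRI-O
	    io.crio/workload: ""
	    # per-container override, following the form shown in the config comments above
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: app
	    image: nginx
	EOF
	
	This only has an effect if the [crio.runtime.workloads.workload-type] table shown above is actually uncommented in the node's crio.conf.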
	I0108 21:26:12.714168  355334 cni.go:84] Creating CNI manager for ""
	I0108 21:26:12.714189  355334 cni.go:136] 1 nodes found, recommending kindnet
	I0108 21:26:12.714213  355334 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:26:12.714249  355334 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-962345 NodeName:multinode-962345 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:26:12.714560  355334 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-962345"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:26:12.714656  355334 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-962345 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:26:12.714728  355334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:26:12.723918  355334 command_runner.go:130] > kubeadm
	I0108 21:26:12.723935  355334 command_runner.go:130] > kubectl
	I0108 21:26:12.723941  355334 command_runner.go:130] > kubelet
	I0108 21:26:12.723966  355334 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:26:12.724019  355334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:26:12.734811  355334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0108 21:26:12.751494  355334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:26:12.767974  355334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0108 21:26:12.784515  355334 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0108 21:26:12.788251  355334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:26:12.799985  355334 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345 for IP: 192.168.39.239
	I0108 21:26:12.800019  355334 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:12.800236  355334 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 21:26:12.800291  355334 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 21:26:12.800339  355334 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key
	I0108 21:26:12.800352  355334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt with IP's: []
	I0108 21:26:12.997190  355334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt ...
	I0108 21:26:12.997225  355334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt: {Name:mk12cf2ef16d64c27f33311a05731067f80bdaae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:12.997400  355334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key ...
	I0108 21:26:12.997417  355334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key: {Name:mkb1ddb8f4c20836f543ea005ae6ef87467f6cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:12.997491  355334 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key.4bd9216f
	I0108 21:26:12.997509  355334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.crt.4bd9216f with IP's: [192.168.39.239 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:26:13.089410  355334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.crt.4bd9216f ...
	I0108 21:26:13.089441  355334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.crt.4bd9216f: {Name:mkd3e6344c75270e0277686bc8621c95e4dbf03d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:13.089593  355334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key.4bd9216f ...
	I0108 21:26:13.089606  355334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key.4bd9216f: {Name:mkc09d1cd71dfa1738b5d428fa9528dfcc687c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:13.089669  355334 certs.go:337] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.crt.4bd9216f -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.crt
	I0108 21:26:13.089754  355334 certs.go:341] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key.4bd9216f -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key
	I0108 21:26:13.089819  355334 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.key
	I0108 21:26:13.089836  355334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.crt with IP's: []
	I0108 21:26:13.140608  355334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.crt ...
	I0108 21:26:13.140637  355334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.crt: {Name:mk0c66b3bd38dbc0cdf750c61471a741c51cc258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:13.140789  355334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.key ...
	I0108 21:26:13.140801  355334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.key: {Name:mk56924babeb4ef3d4b3e0c9bbac275a08f8e0c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:13.140862  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 21:26:13.140880  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 21:26:13.140890  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 21:26:13.140903  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 21:26:13.140914  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:26:13.140933  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:26:13.140946  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:26:13.140963  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:26:13.141014  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 21:26:13.141060  355334 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 21:26:13.141068  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:26:13.141093  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:26:13.141117  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:26:13.141145  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 21:26:13.141186  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:26:13.141219  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:13.141232  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem -> /usr/share/ca-certificates/341982.pem
	I0108 21:26:13.141241  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /usr/share/ca-certificates/3419822.pem
	I0108 21:26:13.141842  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:26:13.165726  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:26:13.188232  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:26:13.209899  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:26:13.231600  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:26:13.334725  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:26:13.356853  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:26:13.378446  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:26:13.399917  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:26:13.421127  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 21:26:13.442839  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 21:26:13.464462  355334 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
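	As a quick sanity check (not performed by the test itself), the SANs requested above (192.168.39.239, 10.96.0.1, 127.0.0.1, 10.0.0.1) can be confirmed in the apiserver certificate that was just copied to the node:
	
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'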
	I0108 21:26:13.480228  355334 ssh_runner.go:195] Run: openssl version
	I0108 21:26:13.485541  355334 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:26:13.485643  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:26:13.496153  355334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:13.500513  355334 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:13.500703  355334 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:13.500776  355334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:26:13.505932  355334 command_runner.go:130] > b5213941
	I0108 21:26:13.506175  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:26:13.516585  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 21:26:13.526752  355334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 21:26:13.530929  355334 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:26:13.531229  355334 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:26:13.531297  355334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 21:26:13.536623  355334 command_runner.go:130] > 51391683
	I0108 21:26:13.536746  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 21:26:13.546654  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 21:26:13.556606  355334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 21:26:13.560789  355334 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:26:13.560822  355334 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:26:13.560856  355334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 21:26:13.565966  355334 command_runner.go:130] > 3ec20f2e
	I0108 21:26:13.566024  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
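	The three blocks above all follow the same OpenSSL trust-store convention: each CA PEM is linked into /etc/ssl/certs under its subject-hash name so that OpenSSL can locate it. Condensed into a standalone sketch, using the minikubeCA.pem path from this run:
	
	# link the CA into the system trust directory, then add the <hash>.0 alias
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"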
	I0108 21:26:13.576099  355334 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:26:13.580031  355334 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:26:13.580072  355334 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:26:13.580126  355334 kubeadm.go:404] StartCluster: {Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:26:13.580231  355334 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:26:13.580275  355334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:26:13.617043  355334 cri.go:89] found id: ""
	I0108 21:26:13.617117  355334 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:26:13.626323  355334 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 21:26:13.626349  355334 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 21:26:13.626355  355334 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 21:26:13.626613  355334 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:26:13.635608  355334 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:26:13.644751  355334 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 21:26:13.644774  355334 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 21:26:13.644781  355334 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 21:26:13.644791  355334 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:26:13.644822  355334 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:26:13.644849  355334 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 21:26:13.758989  355334 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 21:26:13.759035  355334 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 21:26:13.759141  355334 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:26:13.759169  355334 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:26:14.004491  355334 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:26:14.004529  355334 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:26:14.004662  355334 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:26:14.004694  355334 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:26:14.004807  355334 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0108 21:26:14.004817  355334 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0108 21:26:14.247722  355334 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:26:14.247846  355334 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:26:14.372846  355334 out.go:204]   - Generating certificates and keys ...
	I0108 21:26:14.372978  355334 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 21:26:14.372992  355334 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:26:14.373058  355334 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 21:26:14.373069  355334 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:26:14.373144  355334 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:26:14.373153  355334 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:26:14.540539  355334 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:26:14.540552  355334 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:26:14.684557  355334 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:26:14.684591  355334 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 21:26:14.816382  355334 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 21:26:14.816420  355334 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 21:26:15.332956  355334 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 21:26:15.332998  355334 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 21:26:15.333139  355334 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-962345] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0108 21:26:15.333155  355334 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-962345] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0108 21:26:15.527789  355334 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 21:26:15.527873  355334 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 21:26:15.528186  355334 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-962345] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0108 21:26:15.528209  355334 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-962345] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0108 21:26:15.857536  355334 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:26:15.857572  355334 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:26:15.933343  355334 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:26:15.933375  355334 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:26:16.281671  355334 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 21:26:16.281720  355334 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 21:26:16.281931  355334 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:26:16.281969  355334 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:26:16.548717  355334 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:26:16.548752  355334 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:26:16.610802  355334 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:26:16.610835  355334 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:26:16.809197  355334 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:26:16.809239  355334 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:26:17.150766  355334 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:26:17.150801  355334 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:26:17.151318  355334 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:26:17.151329  355334 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:26:17.154907  355334 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:26:17.157183  355334 out.go:204]   - Booting up control plane ...
	I0108 21:26:17.154990  355334 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:26:17.157301  355334 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:26:17.157320  355334 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:26:17.157393  355334 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:26:17.157402  355334 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:26:17.158776  355334 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:26:17.158805  355334 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:26:17.175583  355334 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:26:17.175622  355334 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:26:17.176281  355334 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:26:17.176302  355334 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:26:17.176415  355334 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:26:17.176445  355334 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:26:17.294044  355334 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:26:17.294086  355334 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:26:25.291569  355334 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002044 seconds
	I0108 21:26:25.291607  355334 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002044 seconds
	I0108 21:26:25.291728  355334 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:26:25.291740  355334 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:26:25.312137  355334 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:26:25.312177  355334 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:26:25.850182  355334 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:26:25.850216  355334 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:26:25.850433  355334 kubeadm.go:322] [mark-control-plane] Marking the node multinode-962345 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:26:25.850449  355334 command_runner.go:130] > [mark-control-plane] Marking the node multinode-962345 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:26:26.366911  355334 kubeadm.go:322] [bootstrap-token] Using token: lhga4x.b52ymx7jky41gcoe
	I0108 21:26:26.368515  355334 out.go:204]   - Configuring RBAC rules ...
	I0108 21:26:26.367027  355334 command_runner.go:130] > [bootstrap-token] Using token: lhga4x.b52ymx7jky41gcoe
	I0108 21:26:26.368667  355334 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:26:26.368686  355334 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:26:26.374982  355334 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:26:26.375001  355334 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:26:26.386711  355334 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:26:26.386736  355334 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:26:26.391899  355334 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:26:26.391923  355334 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:26:26.398351  355334 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:26:26.398377  355334 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:26:26.404369  355334 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:26:26.404386  355334 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:26:26.418898  355334 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:26:26.418926  355334 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:26:26.670770  355334 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:26:26.670803  355334 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 21:26:26.784242  355334 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:26:26.784277  355334 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 21:26:26.784282  355334 kubeadm.go:322] 
	I0108 21:26:26.784354  355334 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:26:26.784364  355334 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 21:26:26.784368  355334 kubeadm.go:322] 
	I0108 21:26:26.784506  355334 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:26:26.784518  355334 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 21:26:26.784523  355334 kubeadm.go:322] 
	I0108 21:26:26.784559  355334 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:26:26.784570  355334 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 21:26:26.784643  355334 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:26:26.784653  355334 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:26:26.784720  355334 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:26:26.784728  355334 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:26:26.784733  355334 kubeadm.go:322] 
	I0108 21:26:26.784809  355334 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 21:26:26.784835  355334 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 21:26:26.784845  355334 kubeadm.go:322] 
	I0108 21:26:26.784970  355334 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:26:26.785001  355334 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:26:26.785008  355334 kubeadm.go:322] 
	I0108 21:26:26.785095  355334 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:26:26.785114  355334 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 21:26:26.785220  355334 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:26:26.785233  355334 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:26:26.785322  355334 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:26:26.785340  355334 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:26:26.785353  355334 kubeadm.go:322] 
	I0108 21:26:26.785473  355334 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:26:26.785486  355334 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:26:26.785582  355334 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:26:26.785611  355334 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 21:26:26.785621  355334 kubeadm.go:322] 
	I0108 21:26:26.785741  355334 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lhga4x.b52ymx7jky41gcoe \
	I0108 21:26:26.785757  355334 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token lhga4x.b52ymx7jky41gcoe \
	I0108 21:26:26.785894  355334 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 21:26:26.785904  355334 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 21:26:26.785929  355334 kubeadm.go:322] 	--control-plane 
	I0108 21:26:26.785938  355334 command_runner.go:130] > 	--control-plane 
	I0108 21:26:26.785943  355334 kubeadm.go:322] 
	I0108 21:26:26.786054  355334 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:26:26.786066  355334 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:26:26.786075  355334 kubeadm.go:322] 
	I0108 21:26:26.786178  355334 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lhga4x.b52ymx7jky41gcoe \
	I0108 21:26:26.786194  355334 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lhga4x.b52ymx7jky41gcoe \
	I0108 21:26:26.786334  355334 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 21:26:26.786339  355334 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 21:26:26.786535  355334 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:26:26.786550  355334 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:26:26.786576  355334 cni.go:84] Creating CNI manager for ""
	I0108 21:26:26.786594  355334 cni.go:136] 1 nodes found, recommending kindnet
	I0108 21:26:26.788469  355334 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:26:26.789992  355334 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:26:26.805413  355334 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:26:26.805441  355334 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:26:26.805448  355334 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:26:26.805454  355334 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:26:26.805462  355334 command_runner.go:130] > Access: 2024-01-08 21:25:56.566172033 +0000
	I0108 21:26:26.805470  355334 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0108 21:26:26.805477  355334 command_runner.go:130] > Change: 2024-01-08 21:25:54.724172033 +0000
	I0108 21:26:26.805485  355334 command_runner.go:130] >  Birth: -
	I0108 21:26:26.805688  355334 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:26:26.805712  355334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:26:26.846615  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:26:27.896300  355334 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 21:26:27.902120  355334 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 21:26:27.913214  355334 command_runner.go:130] > serviceaccount/kindnet created
	I0108 21:26:27.934686  355334 command_runner.go:130] > daemonset.apps/kindnet created
	I0108 21:26:27.937723  355334 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.091063553s)
	I0108 21:26:27.937820  355334 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:26:27.937891  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:27.937891  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-962345 minikube.k8s.io/updated_at=2024_01_08T21_26_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:27.984587  355334 command_runner.go:130] > -16
	I0108 21:26:27.984635  355334 ops.go:34] apiserver oom_adj: -16
	I0108 21:26:28.159894  355334 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 21:26:28.160039  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:28.160055  355334 command_runner.go:130] > node/multinode-962345 labeled
	I0108 21:26:28.257078  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:28.660248  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:28.743722  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:29.160665  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:29.242783  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:29.660204  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:29.742188  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:30.160686  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:30.248293  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:30.660318  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:30.746216  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:31.160943  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:31.244540  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:31.660123  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:31.744504  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:32.160223  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:32.240502  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:32.660463  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:32.765389  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:33.160700  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:33.249896  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:33.660988  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:33.751978  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:34.160892  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:34.242855  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:34.660957  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:34.756204  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:35.160729  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:35.247454  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:35.660759  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:35.748419  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:36.161035  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:36.238945  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:36.661029  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:36.745098  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:37.160849  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:37.290077  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:37.660617  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:37.761127  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:38.160462  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:38.258422  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:38.660756  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:38.790367  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:39.160165  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:39.306993  355334 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:26:39.660601  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:26:39.747933  355334 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 21:26:39.748280  355334 command_runner.go:130] > default   0         0s
	I0108 21:26:39.749856  355334 kubeadm.go:1088] duration metric: took 11.812053233s to wait for elevateKubeSystemPrivileges.
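
Note: the retry loop above (repeated `kubectl get sa default` calls returning NotFound roughly every 500ms until `default   0   0s` appears) is minikube waiting for the cluster's "default" ServiceAccount before it proceeds. A minimal client-go sketch of the same wait is shown below; it is illustrative only — minikube itself shells out to kubectl over SSH as the log records, and the kubeconfig path is the on-node one seen above.

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the commands in the log; adjust for a local run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms (matching the cadence visible in the timestamps above)
	// until the "default" ServiceAccount exists, tolerating NotFound errors.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // keep polling, same as the loop in the log
			}
			return err == nil, err
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is present")
}
```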
	I0108 21:26:39.749888  355334 kubeadm.go:406] StartCluster complete in 26.169769913s
	I0108 21:26:39.749915  355334 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:39.750008  355334 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:26:39.750756  355334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:26:39.750990  355334 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:26:39.751025  355334 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:26:39.751149  355334 addons.go:69] Setting storage-provisioner=true in profile "multinode-962345"
	I0108 21:26:39.751173  355334 addons.go:237] Setting addon storage-provisioner=true in "multinode-962345"
	I0108 21:26:39.751174  355334 addons.go:69] Setting default-storageclass=true in profile "multinode-962345"
	I0108 21:26:39.751215  355334 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-962345"
	I0108 21:26:39.751248  355334 host.go:66] Checking if "multinode-962345" exists ...
	I0108 21:26:39.751251  355334 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:26:39.751308  355334 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:26:39.751728  355334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:26:39.751696  355334 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:26:39.751775  355334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:26:39.751776  355334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:26:39.751810  355334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:26:39.752596  355334 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 21:26:39.752961  355334 round_trippers.go:463] GET https://192.168.39.239:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:26:39.752979  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:39.752991  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:39.753001  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:39.763333  355334 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0108 21:26:39.763369  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:39.763380  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:39.763389  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:39.763396  355334 round_trippers.go:580]     Content-Length: 291
	I0108 21:26:39.763405  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:39 GMT
	I0108 21:26:39.763416  355334 round_trippers.go:580]     Audit-Id: 558d7513-f7c6-47b6-b1d2-ab38c7e9aa7a
	I0108 21:26:39.763428  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:39.763440  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:39.763490  355334 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9a0db73a-68c0-469b-b860-0baad5e41646","resourceVersion":"383","creationTimestamp":"2024-01-08T21:26:26Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 21:26:39.764084  355334 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9a0db73a-68c0-469b-b860-0baad5e41646","resourceVersion":"383","creationTimestamp":"2024-01-08T21:26:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 21:26:39.764166  355334 round_trippers.go:463] PUT https://192.168.39.239:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:26:39.764181  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:39.764193  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:39.764205  355334 round_trippers.go:473]     Content-Type: application/json
	I0108 21:26:39.764215  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:39.767770  355334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0108 21:26:39.767776  355334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I0108 21:26:39.768213  355334 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:26:39.768263  355334 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:26:39.768686  355334 main.go:141] libmachine: Using API Version  1
	I0108 21:26:39.768706  355334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:26:39.768822  355334 main.go:141] libmachine: Using API Version  1
	I0108 21:26:39.768844  355334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:26:39.769115  355334 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:26:39.769161  355334 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:26:39.769349  355334 main.go:141] libmachine: (multinode-962345) Calling .GetState
	I0108 21:26:39.769694  355334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:26:39.769742  355334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:26:39.772142  355334 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:26:39.772358  355334 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:26:39.772614  355334 addons.go:237] Setting addon default-storageclass=true in "multinode-962345"
	I0108 21:26:39.772645  355334 host.go:66] Checking if "multinode-962345" exists ...
	I0108 21:26:39.772914  355334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:26:39.772970  355334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:26:39.776161  355334 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0108 21:26:39.776184  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:39.776195  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:39.776215  355334 round_trippers.go:580]     Content-Length: 291
	I0108 21:26:39.776231  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:39 GMT
	I0108 21:26:39.776239  355334 round_trippers.go:580]     Audit-Id: 414fa8dc-0f52-4ca9-8add-61ac91de09ab
	I0108 21:26:39.776247  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:39.776259  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:39.776270  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:39.776299  355334 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9a0db73a-68c0-469b-b860-0baad5e41646","resourceVersion":"384","creationTimestamp":"2024-01-08T21:26:26Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 21:26:39.784921  355334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0108 21:26:39.785282  355334 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:26:39.785763  355334 main.go:141] libmachine: Using API Version  1
	I0108 21:26:39.785793  355334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:26:39.786089  355334 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:26:39.786221  355334 main.go:141] libmachine: (multinode-962345) Calling .GetState
	I0108 21:26:39.787873  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:26:39.789609  355334 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:26:39.788911  355334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0108 21:26:39.790883  355334 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:26:39.790901  355334 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:26:39.790920  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:39.791269  355334 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:26:39.791855  355334 main.go:141] libmachine: Using API Version  1
	I0108 21:26:39.791874  355334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:26:39.792223  355334 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:26:39.792804  355334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:26:39.792844  355334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:26:39.795476  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:39.795577  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:39.795615  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:39.797471  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:39.797497  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:39.797779  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:39.797958  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:26:39.808910  355334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41991
	I0108 21:26:39.809419  355334 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:26:39.809965  355334 main.go:141] libmachine: Using API Version  1
	I0108 21:26:39.809983  355334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:26:39.810311  355334 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:26:39.810482  355334 main.go:141] libmachine: (multinode-962345) Calling .GetState
	I0108 21:26:39.812001  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:26:39.812244  355334 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:26:39.812259  355334 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:26:39.812274  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:26:39.815323  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:39.815827  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:26:39.815863  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:26:39.816016  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:26:39.816206  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:26:39.816376  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:26:39.816556  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:26:39.946514  355334 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:26:39.952733  355334 command_runner.go:130] > apiVersion: v1
	I0108 21:26:39.952753  355334 command_runner.go:130] > data:
	I0108 21:26:39.952757  355334 command_runner.go:130] >   Corefile: |
	I0108 21:26:39.952763  355334 command_runner.go:130] >     .:53 {
	I0108 21:26:39.952769  355334 command_runner.go:130] >         errors
	I0108 21:26:39.952778  355334 command_runner.go:130] >         health {
	I0108 21:26:39.952787  355334 command_runner.go:130] >            lameduck 5s
	I0108 21:26:39.952793  355334 command_runner.go:130] >         }
	I0108 21:26:39.952798  355334 command_runner.go:130] >         ready
	I0108 21:26:39.952809  355334 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 21:26:39.952820  355334 command_runner.go:130] >            pods insecure
	I0108 21:26:39.952828  355334 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 21:26:39.952836  355334 command_runner.go:130] >            ttl 30
	I0108 21:26:39.952840  355334 command_runner.go:130] >         }
	I0108 21:26:39.952847  355334 command_runner.go:130] >         prometheus :9153
	I0108 21:26:39.952856  355334 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 21:26:39.952867  355334 command_runner.go:130] >            max_concurrent 1000
	I0108 21:26:39.952874  355334 command_runner.go:130] >         }
	I0108 21:26:39.952882  355334 command_runner.go:130] >         cache 30
	I0108 21:26:39.952889  355334 command_runner.go:130] >         loop
	I0108 21:26:39.952898  355334 command_runner.go:130] >         reload
	I0108 21:26:39.952906  355334 command_runner.go:130] >         loadbalance
	I0108 21:26:39.952915  355334 command_runner.go:130] >     }
	I0108 21:26:39.952921  355334 command_runner.go:130] > kind: ConfigMap
	I0108 21:26:39.952928  355334 command_runner.go:130] > metadata:
	I0108 21:26:39.952936  355334 command_runner.go:130] >   creationTimestamp: "2024-01-08T21:26:26Z"
	I0108 21:26:39.952944  355334 command_runner.go:130] >   name: coredns
	I0108 21:26:39.952955  355334 command_runner.go:130] >   namespace: kube-system
	I0108 21:26:39.952999  355334 command_runner.go:130] >   resourceVersion: "266"
	I0108 21:26:39.953016  355334 command_runner.go:130] >   uid: 40588f70-e960-47a7-b449-3780d271733d
	I0108 21:26:39.954271  355334 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:26:40.018120  355334 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:26:40.253646  355334 round_trippers.go:463] GET https://192.168.39.239:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:26:40.253672  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:40.253681  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:40.253687  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:40.256319  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:40.256342  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:40.256349  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:40 GMT
	I0108 21:26:40.256355  355334 round_trippers.go:580]     Audit-Id: 7ef7e165-17fc-487c-b2ca-1144807aaedb
	I0108 21:26:40.256360  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:40.256365  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:40.256370  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:40.256375  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:40.256380  355334 round_trippers.go:580]     Content-Length: 291
	I0108 21:26:40.256629  355334 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9a0db73a-68c0-469b-b860-0baad5e41646","resourceVersion":"394","creationTimestamp":"2024-01-08T21:26:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:26:40.256772  355334 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-962345" context rescaled to 1 replicas
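
Note: the GET/PUT pair against `/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale` above rescales CoreDNS from 2 replicas to 1 for this single-node profile. A hedged client-go equivalent using the Scale subresource follows; the function and package names are illustrative, not minikube's own code — the actual HTTP requests are exactly what the log records.

```go
// rescaleCoreDNS sets the kube-system/coredns Deployment to the given replica
// count via the autoscaling/v1 Scale subresource, mirroring the GET + PUT pair
// recorded in the log above.
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	// Read the current Scale (the log shows spec.replicas=2 at resourceVersion 383).
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Write back the desired replica count (the PUT above sets spec.replicas=1).
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
```

Called as `rescaleCoreDNS(ctx, clientset, 1)`, this would correspond to the resourceVersion bump from 383 to 384 visible in the responses above.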
	I0108 21:26:40.256814  355334 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:26:40.258708  355334 out.go:177] * Verifying Kubernetes components...
	I0108 21:26:40.260343  355334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:26:40.857396  355334 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 21:26:40.873946  355334 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 21:26:40.882152  355334 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 21:26:40.890953  355334 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 21:26:40.898943  355334 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 21:26:40.914327  355334 command_runner.go:130] > pod/storage-provisioner created
	I0108 21:26:40.916800  355334 command_runner.go:130] > configmap/coredns replaced
	I0108 21:26:40.916848  355334 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
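
Note: the `sed` pipeline run at 21:26:39.954 rewrites the coredns ConfigMap so that cluster DNS resolves `host.minikube.internal` to the host IP (192.168.39.1 here) and adds a `log` directive before `errors`. Below is a rough client-go sketch of the same edit, assuming the Corefile layout dumped earlier; function and package names are illustrative and this is not minikube's implementation (which performs the edit with the shell command shown in the log).

```go
// addHostRecord injects a hosts{} block for host.minikube.internal ahead of the
// "forward . /etc/resolv.conf" stanza in the kube-system/coredns ConfigMap,
// approximating the sed-based edit shown in the log.
package example

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func addHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	corefile := cm.Data["Corefile"]
	if strings.Contains(corefile, "host.minikube.internal") {
		return nil // already injected
	}
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	// Insert the hosts{} block directly above the forward plugin, as the sed rule does.
	cm.Data["Corefile"] = strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
```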
	I0108 21:26:40.916896  355334 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0108 21:26:40.916956  355334 main.go:141] libmachine: Making call to close driver server
	I0108 21:26:40.916976  355334 main.go:141] libmachine: (multinode-962345) Calling .Close
	I0108 21:26:40.917091  355334 main.go:141] libmachine: Making call to close driver server
	I0108 21:26:40.917111  355334 main.go:141] libmachine: (multinode-962345) Calling .Close
	I0108 21:26:40.917280  355334 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:26:40.917301  355334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:26:40.917307  355334 main.go:141] libmachine: (multinode-962345) DBG | Closing plugin on server side
	I0108 21:26:40.917314  355334 main.go:141] libmachine: Making call to close driver server
	I0108 21:26:40.917328  355334 main.go:141] libmachine: (multinode-962345) Calling .Close
	I0108 21:26:40.917532  355334 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:26:40.917533  355334 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:26:40.917546  355334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:26:40.917631  355334 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:26:40.917658  355334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:26:40.917669  355334 main.go:141] libmachine: Making call to close driver server
	I0108 21:26:40.917674  355334 round_trippers.go:463] GET https://192.168.39.239:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 21:26:40.917679  355334 main.go:141] libmachine: (multinode-962345) Calling .Close
	I0108 21:26:40.917683  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:40.917695  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:40.917710  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:40.917966  355334 main.go:141] libmachine: (multinode-962345) DBG | Closing plugin on server side
	I0108 21:26:40.918000  355334 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:26:40.917900  355334 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:26:40.918018  355334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:26:40.918276  355334 node_ready.go:35] waiting up to 6m0s for node "multinode-962345" to be "Ready" ...
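
Note: from this point the log is minikube polling `GET /api/v1/nodes/multinode-962345` (the client-go round-tripper traces below) until the node reports a Ready condition; the repeated response bodies that follow are those polls. A minimal sketch of an equivalent readiness wait is given here under assumed names — minikube's real wait is the node_ready.go code path referenced above.

```go
// waitNodeReady polls the named Node until its NodeReady condition is True,
// matching the repeated GET /api/v1/nodes/<name> requests in the log below.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```

A call such as `waitNodeReady(ctx, clientset, "multinode-962345", 6*time.Minute)` corresponds to the "waiting up to 6m0s" message above.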
	I0108 21:26:40.918393  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:40.918405  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:40.918415  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:40.918427  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:40.924180  355334 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:26:40.924207  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:40.924214  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:40.924219  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:40.924225  355334 round_trippers.go:580]     Content-Length: 1273
	I0108 21:26:40.924230  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:40 GMT
	I0108 21:26:40.924235  355334 round_trippers.go:580]     Audit-Id: 09951ff8-5d02-48e1-92bb-9256edc4f451
	I0108 21:26:40.924240  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:40.924245  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:40.924894  355334 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"60b65736-f8c7-45f3-bf19-e32377207e46","resourceVersion":"395","creationTimestamp":"2024-01-08T21:26:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:26:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 21:26:40.925320  355334 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"60b65736-f8c7-45f3-bf19-e32377207e46","resourceVersion":"395","creationTimestamp":"2024-01-08T21:26:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:26:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 21:26:40.925365  355334 round_trippers.go:463] PUT https://192.168.39.239:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 21:26:40.925377  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:40.925388  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:40.925399  355334 round_trippers.go:473]     Content-Type: application/json
	I0108 21:26:40.925407  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:40.930573  355334 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0108 21:26:40.930590  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:40.930597  355334 round_trippers.go:580]     Audit-Id: cb3d8702-b185-4ac9-a489-015148ae95fb
	I0108 21:26:40.930603  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:40.930610  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:40.930619  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:40.930628  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:40.930640  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:40 GMT
	I0108 21:26:40.930856  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"358","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 21:26:40.932876  355334 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:26:40.932895  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:40.932905  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:40 GMT
	I0108 21:26:40.932919  355334 round_trippers.go:580]     Audit-Id: 8de932a1-1c53-45c3-9dcd-4a9e86bf0bca
	I0108 21:26:40.932924  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:40.932930  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:40.932937  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:40.932946  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:40.932958  355334 round_trippers.go:580]     Content-Length: 1220
	I0108 21:26:40.932986  355334 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"60b65736-f8c7-45f3-bf19-e32377207e46","resourceVersion":"395","creationTimestamp":"2024-01-08T21:26:40Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:26:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 21:26:40.933127  355334 main.go:141] libmachine: Making call to close driver server
	I0108 21:26:40.933141  355334 main.go:141] libmachine: (multinode-962345) Calling .Close
	I0108 21:26:40.933424  355334 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:26:40.933466  355334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:26:40.933441  355334 main.go:141] libmachine: (multinode-962345) DBG | Closing plugin on server side
	I0108 21:26:40.935506  355334 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:26:40.937166  355334 addons.go:508] enable addons completed in 1.186148491s: enabled=[storage-provisioner default-storageclass]
	I0108 21:26:41.418597  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:41.418620  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:41.418629  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:41.418635  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:41.421349  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:41.421376  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:41.421383  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:41.421388  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:41.421393  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:41 GMT
	I0108 21:26:41.421398  355334 round_trippers.go:580]     Audit-Id: c33ad441-ab9a-43f9-bf01-3a091295f370
	I0108 21:26:41.421404  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:41.421412  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:41.421575  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"358","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 21:26:41.919338  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:41.919394  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:41.919403  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:41.919409  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:41.922583  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:41.922611  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:41.922621  355334 round_trippers.go:580]     Audit-Id: e46f3471-36f1-4dbf-855d-cdba76f6a076
	I0108 21:26:41.922630  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:41.922637  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:41.922646  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:41.922652  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:41.922658  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:41 GMT
	I0108 21:26:41.923250  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"358","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 21:26:42.418927  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:42.418954  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:42.418962  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:42.418968  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:42.424680  355334 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:26:42.424706  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:42.424714  355334 round_trippers.go:580]     Audit-Id: b134516d-fd6f-4ef0-a851-478f735e1a9f
	I0108 21:26:42.424719  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:42.424725  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:42.424730  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:42.424735  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:42.424740  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:42 GMT
	I0108 21:26:42.425592  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"358","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 21:26:42.919377  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:42.919404  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:42.919412  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:42.919419  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:42.922319  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:42.922343  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:42.922353  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:42.922361  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:42.922369  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:42.922378  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:42.922388  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:42 GMT
	I0108 21:26:42.922397  355334 round_trippers.go:580]     Audit-Id: 59d48dc5-3eaa-42af-82b0-5ac135200393
	I0108 21:26:42.922550  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"358","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 21:26:42.922884  355334 node_ready.go:58] node "multinode-962345" has status "Ready":"False"
	I0108 21:26:43.419242  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:43.419267  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:43.419283  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:43.419291  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:43.422983  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:43.423003  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:43.423009  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:43.423014  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:43.423020  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:43.423025  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:43 GMT
	I0108 21:26:43.423030  355334 round_trippers.go:580]     Audit-Id: 89a52146-8ef3-4b0d-a9b9-0932b99c3976
	I0108 21:26:43.423037  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:43.423851  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"358","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 21:26:43.918524  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:43.918552  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:43.918561  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:43.918567  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:43.921602  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:43.921629  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:43.921640  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:43.921647  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:43 GMT
	I0108 21:26:43.921653  355334 round_trippers.go:580]     Audit-Id: 3d66e7d6-1dbf-4705-81f9-762cac691b96
	I0108 21:26:43.921658  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:43.921664  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:43.921670  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:43.921814  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"358","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 21:26:44.419575  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:44.419612  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:44.419623  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:44.419632  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:44.423246  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:44.423269  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:44.423279  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:44.423287  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:44.423310  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:44.423323  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:44 GMT
	I0108 21:26:44.423333  355334 round_trippers.go:580]     Audit-Id: e661f8d6-4df8-442e-b102-16e5958236a5
	I0108 21:26:44.423339  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:44.423639  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"358","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 21:26:44.919003  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:44.919028  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:44.919037  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:44.919043  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:44.927658  355334 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:26:44.927679  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:44.927685  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:44.927691  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:44 GMT
	I0108 21:26:44.927708  355334 round_trippers.go:580]     Audit-Id: 1a85c80e-d54e-44ca-bb39-6d8200c58f24
	I0108 21:26:44.927717  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:44.927725  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:44.927733  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:44.928049  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:44.928369  355334 node_ready.go:49] node "multinode-962345" has status "Ready":"True"
	I0108 21:26:44.928386  355334 node_ready.go:38] duration metric: took 4.01007194s waiting for node "multinode-962345" to be "Ready" ...
	I0108 21:26:44.928396  355334 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
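	(Editor's note) From here the harness lists kube-system pods and then waits on each system-critical pod's Ready condition in turn (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler). A minimal sketch of that per-pod check with client-go, again illustrative only and not minikube's pod_ready.go (kubeconfig path is a placeholder), might be:

	```go
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's PodReady condition is True,
	// the same status the log lines below print as "Ready":"True"/"False".
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", pods.Items[i].Name, podIsReady(&pods.Items[i]))
		}
	}
	```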
	I0108 21:26:44.928481  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:26:44.928490  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:44.928497  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:44.928503  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:44.938484  355334 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0108 21:26:44.938507  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:44.938517  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:44.938526  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:44.938534  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:44.938541  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:44 GMT
	I0108 21:26:44.938548  355334 round_trippers.go:580]     Audit-Id: d88ddbc2-1de3-426d-ac93-2720b58aad9a
	I0108 21:26:44.938556  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:44.939489  355334 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"420","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50955 chars]
	I0108 21:26:44.942447  355334 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:44.942538  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:26:44.942549  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:44.942560  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:44.942570  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:44.960942  355334 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0108 21:26:44.960968  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:44.960974  355334 round_trippers.go:580]     Audit-Id: a64f9e4e-54bf-47bc-8504-a1863172e44a
	I0108 21:26:44.960980  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:44.960985  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:44.960990  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:44.960995  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:44.961000  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:44 GMT
	I0108 21:26:44.961740  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"423","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 21:26:44.962195  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:44.962210  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:44.962218  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:44.962223  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:44.966242  355334 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:44.966260  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:44.966267  355334 round_trippers.go:580]     Audit-Id: ac315b27-98c5-4a18-97a7-e9a6bdd8c57b
	I0108 21:26:44.966274  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:44.966280  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:44.966285  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:44.966290  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:44.966296  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:44 GMT
	I0108 21:26:44.966468  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:45.443304  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:26:45.443331  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:45.443339  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:45.443345  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:45.447458  355334 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:45.447488  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:45.447499  355334 round_trippers.go:580]     Audit-Id: c08034db-fb1d-42ba-9cfd-b1bbad02831c
	I0108 21:26:45.447511  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:45.447519  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:45.447527  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:45.447535  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:45.447543  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:45 GMT
	I0108 21:26:45.447771  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"423","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 21:26:45.448235  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:45.448248  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:45.448255  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:45.448261  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:45.453237  355334 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:26:45.453260  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:45.453271  355334 round_trippers.go:580]     Audit-Id: 9f309fbb-f83d-46fb-afe6-51a569e52830
	I0108 21:26:45.453279  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:45.453288  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:45.453296  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:45.453305  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:45.453313  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:45 GMT
	I0108 21:26:45.453490  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:45.943178  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:26:45.943210  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:45.943223  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:45.943233  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:45.946413  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:45.946439  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:45.946447  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:45.946452  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:45.946457  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:45.946462  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:45.946467  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:45 GMT
	I0108 21:26:45.946473  355334 round_trippers.go:580]     Audit-Id: 3b161ab3-68ac-4439-b830-43077c40c572
	I0108 21:26:45.946601  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"423","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 21:26:45.947059  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:45.947070  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:45.947077  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:45.947083  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:45.954757  355334 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:26:45.954783  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:45.954790  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:45.954796  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:45 GMT
	I0108 21:26:45.954801  355334 round_trippers.go:580]     Audit-Id: b624c261-22e6-4b8c-a744-3e389b74008e
	I0108 21:26:45.954806  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:45.954811  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:45.954817  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:45.956195  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:46.442853  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:26:46.442886  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:46.442900  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:46.442908  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:46.445932  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:46.445961  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:46.445972  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:46 GMT
	I0108 21:26:46.445980  355334 round_trippers.go:580]     Audit-Id: 0dd63fb9-f1f7-4510-b030-cc24da131e37
	I0108 21:26:46.445986  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:46.445993  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:46.446000  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:46.446007  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:46.446287  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"423","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 21:26:46.446893  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:46.446913  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:46.446925  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:46.446935  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:46.449248  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:46.449272  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:46.449279  355334 round_trippers.go:580]     Audit-Id: e4423942-6e4b-49e7-8fb2-808347a78e0c
	I0108 21:26:46.449284  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:46.449290  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:46.449295  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:46.449300  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:46.449305  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:46 GMT
	I0108 21:26:46.449632  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:46.943430  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:26:46.943460  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:46.943469  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:46.943478  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:46.946885  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:46.946914  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:46.946924  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:46.946932  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:46.946939  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:46.946947  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:46 GMT
	I0108 21:26:46.946954  355334 round_trippers.go:580]     Audit-Id: a7bc706b-2a0b-43b6-b964-3e7a78d68cc1
	I0108 21:26:46.946962  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:46.947535  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"423","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 21:26:46.947991  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:46.948003  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:46.948011  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:46.948017  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:46.950463  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:46.950476  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:46.950482  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:46.950487  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:46 GMT
	I0108 21:26:46.950492  355334 round_trippers.go:580]     Audit-Id: ff403a37-3e8c-4358-801d-4e474d5b8df9
	I0108 21:26:46.950497  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:46.950503  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:46.950511  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:46.951088  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:46.951454  355334 pod_ready.go:102] pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace has status "Ready":"False"
	I0108 21:26:47.442759  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:26:47.442791  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.442807  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.442816  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.445744  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:47.445769  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.445776  355334 round_trippers.go:580]     Audit-Id: 86acb63c-6ec2-48a7-8d14-a931aa79ace2
	I0108 21:26:47.445781  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.445786  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.445793  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.445803  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.445813  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.445999  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"439","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 21:26:47.446485  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:47.446500  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.446511  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.446520  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.449562  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:47.449582  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.449593  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.449602  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.449608  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.449616  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.449621  355334 round_trippers.go:580]     Audit-Id: 81593975-1117-4101-ac1e-e896a7ae518f
	I0108 21:26:47.449627  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.449939  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:47.450248  355334 pod_ready.go:92] pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:47.450265  355334 pod_ready.go:81] duration metric: took 2.507795052s waiting for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.450278  355334 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.450332  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-962345
	I0108 21:26:47.450340  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.450374  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.450386  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.452242  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:26:47.452264  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.452275  355334 round_trippers.go:580]     Audit-Id: a9dd3684-9b09-40fa-ab30-82e06c4f3f2b
	I0108 21:26:47.452281  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.452286  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.452291  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.452297  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.452302  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.452395  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-962345","namespace":"kube-system","uid":"44773ce7-5393-4178-a985-d8bf216f88f1","resourceVersion":"325","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.239:2379","kubernetes.io/config.hash":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.mirror":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.seen":"2024-01-08T21:26:26.755438257Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 21:26:47.452816  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:47.452832  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.452839  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.452847  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.454805  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:26:47.454824  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.454832  355334 round_trippers.go:580]     Audit-Id: e32e7f84-dc6d-4040-871e-a07310a3f316
	I0108 21:26:47.454840  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.454848  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.454860  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.454871  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.454883  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.455069  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:47.455382  355334 pod_ready.go:92] pod "etcd-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:47.455398  355334 pod_ready.go:81] duration metric: took 5.111654ms waiting for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.455414  355334 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.455477  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-962345
	I0108 21:26:47.455486  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.455496  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.455509  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.457217  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:26:47.457231  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.457239  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.457247  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.457256  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.457268  355334 round_trippers.go:580]     Audit-Id: 52c7ba31-5c9e-43c3-915d-7dcf72a2e582
	I0108 21:26:47.457279  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.457295  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.457405  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-962345","namespace":"kube-system","uid":"bea03251-08df-4434-bc4a-36ef454e151e","resourceVersion":"331","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.239:8443","kubernetes.io/config.hash":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.mirror":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.seen":"2024-01-08T21:26:26.755439577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 21:26:47.457764  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:47.457777  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.457790  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.457799  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.459461  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:26:47.459482  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.459491  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.459499  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.459509  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.459522  355334 round_trippers.go:580]     Audit-Id: 1ac45ae7-3f8e-443d-b481-939143ba8088
	I0108 21:26:47.459533  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.459542  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.459839  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:47.460100  355334 pod_ready.go:92] pod "kube-apiserver-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:47.460115  355334 pod_ready.go:81] duration metric: took 4.690379ms waiting for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.460127  355334 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.460172  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-962345
	I0108 21:26:47.460182  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.460191  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.460202  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.462117  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:26:47.462129  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.462138  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.462147  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.462155  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.462170  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.462179  355334 round_trippers.go:580]     Audit-Id: c4bab120-1f3d-4ae5-b86f-5a0816a32f54
	I0108 21:26:47.462191  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.462413  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-962345","namespace":"kube-system","uid":"80b86d62-83f0-4550-988f-6255409d39da","resourceVersion":"308","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.mirror":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.seen":"2024-01-08T21:26:26.755427365Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 21:26:47.462757  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:47.462770  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.462779  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.462787  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.464914  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:47.464935  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.464944  355334 round_trippers.go:580]     Audit-Id: 36843cfa-a229-4dff-a569-95ad051ffea7
	I0108 21:26:47.464952  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.464960  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.464969  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.464978  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.464993  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.465094  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:47.465360  355334 pod_ready.go:92] pod "kube-controller-manager-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:47.465377  355334 pod_ready.go:81] duration metric: took 5.242068ms waiting for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.465390  355334 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.465441  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:26:47.465451  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.465461  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.465471  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.467176  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:26:47.467195  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.467205  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.467214  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.467223  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.467233  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.467240  355334 round_trippers.go:580]     Audit-Id: 3603120c-bb50-4eef-9d51-7c881b24fe7b
	I0108 21:26:47.467252  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.467476  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmjzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"fbfa39a4-ba62-4e31-8126-9a320311e846","resourceVersion":"409","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 21:26:47.467897  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:47.467912  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.467919  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.467925  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.469573  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:26:47.469586  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.469591  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.469597  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.469603  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.469610  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.469620  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.469629  355334 round_trippers.go:580]     Audit-Id: 5d40c0b8-22a9-44dc-9cd5-fbf57cc71092
	I0108 21:26:47.470038  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:47.470371  355334 pod_ready.go:92] pod "kube-proxy-bmjzs" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:47.470387  355334 pod_ready.go:81] duration metric: took 4.990461ms waiting for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.470397  355334 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.642868  355334 request.go:629] Waited for 172.315264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:26:47.642950  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:26:47.642960  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.642967  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.642974  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.646380  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:47.646406  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.646418  355334 round_trippers.go:580]     Audit-Id: 7bc330eb-7f68-4949-b6d9-6bcacc5b379f
	I0108 21:26:47.646427  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.646440  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.646446  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.646468  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.646523  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.646785  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-962345","namespace":"kube-system","uid":"3778c0a4-1528-4336-9f02-b77a2a6a1c34","resourceVersion":"306","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.mirror":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.seen":"2024-01-08T21:26:26.755431609Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 21:26:47.843729  355334 request.go:629] Waited for 196.419459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:47.843804  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:26:47.843809  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.843817  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.843829  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.847113  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:47.847140  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.847150  355334 round_trippers.go:580]     Audit-Id: 0a46a674-e12d-40e2-a8ec-edc4de57c17d
	I0108 21:26:47.847160  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.847174  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.847181  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.847189  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.847195  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.847349  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:26:47.847781  355334 pod_ready.go:92] pod "kube-scheduler-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:26:47.847802  355334 pod_ready.go:81] duration metric: took 377.392782ms waiting for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:26:47.847821  355334 pod_ready.go:38] duration metric: took 2.919391889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
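
The pod_ready trace above is minikube repeatedly GETting each control-plane pod (and its node) until the PodReady condition reports True. A minimal client-go sketch of that polling pattern, assuming an already-built clientset from the profile's kubeconfig; waitPodReady is a hypothetical helper, not minikube's actual code:

```go
package k8swait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a kube-system pod until its Ready condition is True or
// the timeout expires. Sketch only; the real loop also re-reads the node.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // fixed interval for the sketch
	}
	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
}
```
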
	I0108 21:26:47.847845  355334 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:26:47.847905  355334 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:26:47.860813  355334 command_runner.go:130] > 1062
	I0108 21:26:47.860956  355334 api_server.go:72] duration metric: took 7.604099377s to wait for apiserver process to appear ...
	I0108 21:26:47.860974  355334 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:26:47.860996  355334 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0108 21:26:47.866623  355334 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I0108 21:26:47.866708  355334 round_trippers.go:463] GET https://192.168.39.239:8443/version
	I0108 21:26:47.866718  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:47.866725  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:47.866732  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:47.868100  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:26:47.868118  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:47.868127  355334 round_trippers.go:580]     Content-Length: 264
	I0108 21:26:47.868141  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:47 GMT
	I0108 21:26:47.868149  355334 round_trippers.go:580]     Audit-Id: cece1a40-5dae-4247-a7db-00e13d43f735
	I0108 21:26:47.868154  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:47.868160  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:47.868165  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:47.868170  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:47.868186  355334 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 21:26:47.868314  355334 api_server.go:141] control plane version: v1.28.4
	I0108 21:26:47.868340  355334 api_server.go:131] duration metric: took 7.358589ms to wait for apiserver health ...
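
The health check logged here is an HTTPS GET against /healthz followed by /version on the apiserver endpoint. A rough standard-library equivalent (certificate verification is skipped only to keep the sketch short; the real client trusts the profile's cluster CA):

```go
package apicheck

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// checkAPIServer probes <base>/healthz and returns the version reported by
// <base>/version (e.g. "v1.28.4" in the response body above).
func checkAPIServer(base string) (string, error) {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}

	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return "", err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("healthz returned %d", resp.StatusCode)
	}

	resp, err = client.Get(base + "/version")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.Unmarshal(body, &v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}
```
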
	I0108 21:26:47.868349  355334 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:26:48.043537  355334 request.go:629] Waited for 175.114039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:26:48.043617  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:26:48.043625  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:48.043636  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:48.043646  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:48.051500  355334 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:26:48.051526  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:48.051533  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:48.051539  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:48.051549  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:48.051554  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:48.051560  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:48 GMT
	I0108 21:26:48.051569  355334 round_trippers.go:580]     Audit-Id: ff994dfe-3df8-403e-8632-9610839c849a
	I0108 21:26:48.053529  355334 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"439","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0108 21:26:48.055191  355334 system_pods.go:59] 8 kube-system pods found
	I0108 21:26:48.055237  355334 system_pods.go:61] "coredns-5dd5756b68-v6dmd" [9c1edff2-3b29-4045-b7b9-935c47115d16] Running
	I0108 21:26:48.055261  355334 system_pods.go:61] "etcd-multinode-962345" [44773ce7-5393-4178-a985-d8bf216f88f1] Running
	I0108 21:26:48.055267  355334 system_pods.go:61] "kindnet-5w9nh" [b84fc0ee-c9b1-4e6c-b066-536f2fd56d52] Running
	I0108 21:26:48.055275  355334 system_pods.go:61] "kube-apiserver-multinode-962345" [bea03251-08df-4434-bc4a-36ef454e151e] Running
	I0108 21:26:48.055280  355334 system_pods.go:61] "kube-controller-manager-multinode-962345" [80b86d62-83f0-4550-988f-6255409d39da] Running
	I0108 21:26:48.055287  355334 system_pods.go:61] "kube-proxy-bmjzs" [fbfa39a4-ba62-4e31-8126-9a320311e846] Running
	I0108 21:26:48.055291  355334 system_pods.go:61] "kube-scheduler-multinode-962345" [3778c0a4-1528-4336-9f02-b77a2a6a1c34] Running
	I0108 21:26:48.055298  355334 system_pods.go:61] "storage-provisioner" [da89492c-e129-462d-b84e-2f4a10085550] Running
	I0108 21:26:48.055306  355334 system_pods.go:74] duration metric: took 186.949529ms to wait for pod list to return data ...
	I0108 21:26:48.055316  355334 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:26:48.242925  355334 request.go:629] Waited for 187.505229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:26:48.242996  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:26:48.243003  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:48.243012  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:48.243024  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:48.245963  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:48.245980  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:48.245986  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:48 GMT
	I0108 21:26:48.245992  355334 round_trippers.go:580]     Audit-Id: a1d1d68a-30cc-4114-9d14-1696ebcf3693
	I0108 21:26:48.245997  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:48.246002  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:48.246008  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:48.246016  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:48.246025  355334 round_trippers.go:580]     Content-Length: 261
	I0108 21:26:48.246051  355334 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"15bd0783-c8a5-4e50-84fc-9a8ed6232cdb","resourceVersion":"369","creationTimestamp":"2024-01-08T21:26:39Z"}}]}
	I0108 21:26:48.246251  355334 default_sa.go:45] found service account: "default"
	I0108 21:26:48.246272  355334 default_sa.go:55] duration metric: took 190.94714ms for default service account to be created ...
	I0108 21:26:48.246281  355334 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:26:48.442817  355334 request.go:629] Waited for 196.437621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:26:48.442895  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:26:48.442900  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:48.442910  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:48.442920  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:48.446804  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:26:48.446827  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:48.446835  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:48.446843  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:48.446851  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:48.446860  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:48.446869  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:48 GMT
	I0108 21:26:48.446878  355334 round_trippers.go:580]     Audit-Id: 5ddc6756-6e33-4cdd-9baf-95b507c8e7c2
	I0108 21:26:48.448342  355334 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"439","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0108 21:26:48.450004  355334 system_pods.go:86] 8 kube-system pods found
	I0108 21:26:48.450027  355334 system_pods.go:89] "coredns-5dd5756b68-v6dmd" [9c1edff2-3b29-4045-b7b9-935c47115d16] Running
	I0108 21:26:48.450031  355334 system_pods.go:89] "etcd-multinode-962345" [44773ce7-5393-4178-a985-d8bf216f88f1] Running
	I0108 21:26:48.450035  355334 system_pods.go:89] "kindnet-5w9nh" [b84fc0ee-c9b1-4e6c-b066-536f2fd56d52] Running
	I0108 21:26:48.450039  355334 system_pods.go:89] "kube-apiserver-multinode-962345" [bea03251-08df-4434-bc4a-36ef454e151e] Running
	I0108 21:26:48.450046  355334 system_pods.go:89] "kube-controller-manager-multinode-962345" [80b86d62-83f0-4550-988f-6255409d39da] Running
	I0108 21:26:48.450052  355334 system_pods.go:89] "kube-proxy-bmjzs" [fbfa39a4-ba62-4e31-8126-9a320311e846] Running
	I0108 21:26:48.450059  355334 system_pods.go:89] "kube-scheduler-multinode-962345" [3778c0a4-1528-4336-9f02-b77a2a6a1c34] Running
	I0108 21:26:48.450070  355334 system_pods.go:89] "storage-provisioner" [da89492c-e129-462d-b84e-2f4a10085550] Running
	I0108 21:26:48.450085  355334 system_pods.go:126] duration metric: took 203.796027ms to wait for k8s-apps to be running ...
	I0108 21:26:48.450094  355334 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:26:48.450142  355334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:26:48.465573  355334 system_svc.go:56] duration metric: took 15.467273ms WaitForService to wait for kubelet.
	I0108 21:26:48.465603  355334 kubeadm.go:581] duration metric: took 8.208750346s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
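
The kubelet service check just above is a single `sudo systemctl is-active --quiet service kubelet` run over the node's SSH session, with the exit status as the answer. A minimal sketch using golang.org/x/crypto/ssh (an assumed dependency here; minikube's own ssh_runner wraps an established connection in much the same way):

```go
package sshcheck

import (
	"golang.org/x/crypto/ssh"
)

// kubeletActive runs the same probe as the log above over an established SSH
// connection: a zero exit status from systemctl means the unit is active.
func kubeletActive(client *ssh.Client) bool {
	session, err := client.NewSession()
	if err != nil {
		return false
	}
	defer session.Close()
	// --quiet suppresses output; Run returns a non-nil error on non-zero exit.
	return session.Run("sudo systemctl is-active --quiet service kubelet") == nil
}
```
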
	I0108 21:26:48.465622  355334 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:26:48.642823  355334 request.go:629] Waited for 177.109494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes
	I0108 21:26:48.642892  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes
	I0108 21:26:48.642897  355334 round_trippers.go:469] Request Headers:
	I0108 21:26:48.642905  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:26:48.642911  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:26:48.645869  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:26:48.645895  355334 round_trippers.go:577] Response Headers:
	I0108 21:26:48.645904  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:26:48.645911  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:26:48 GMT
	I0108 21:26:48.645918  355334 round_trippers.go:580]     Audit-Id: 5cbdb58d-0a81-4b49-9e4f-4cc7e0575cd2
	I0108 21:26:48.645926  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:26:48.645935  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:26:48.645942  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:26:48.646182  355334 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0108 21:26:48.646551  355334 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:26:48.646573  355334 node_conditions.go:123] node cpu capacity is 2
	I0108 21:26:48.646584  355334 node_conditions.go:105] duration metric: took 180.957301ms to run NodePressure ...
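
The NodePressure step lists the nodes and reads their reported capacity (here roughly 17 GiB of ephemeral storage and 2 CPUs). A small client-go sketch of reading those fields, assuming an existing clientset; printNodeCapacity is a hypothetical helper:

```go
package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity mirrors the capacity values logged above for each node.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}
```
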
	I0108 21:26:48.646597  355334 start.go:228] waiting for startup goroutines ...
	I0108 21:26:48.646607  355334 start.go:233] waiting for cluster config update ...
	I0108 21:26:48.646616  355334 start.go:242] writing updated cluster config ...
	I0108 21:26:48.648731  355334 out.go:177] 
	I0108 21:26:48.650290  355334 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:26:48.650364  355334 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:26:48.652107  355334 out.go:177] * Starting worker node multinode-962345-m02 in cluster multinode-962345
	I0108 21:26:48.653410  355334 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:26:48.653428  355334 cache.go:56] Caching tarball of preloaded images
	I0108 21:26:48.653558  355334 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:26:48.653571  355334 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:26:48.653630  355334 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:26:48.653775  355334 start.go:365] acquiring machines lock for multinode-962345-m02: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:26:48.653814  355334 start.go:369] acquired machines lock for "multinode-962345-m02" in 20.393µs
	I0108 21:26:48.653831  355334 start.go:93] Provisioning new machine with config: &{Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 21:26:48.653886  355334 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0108 21:26:48.655650  355334 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 21:26:48.655775  355334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:26:48.655806  355334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:26:48.670270  355334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35099
	I0108 21:26:48.670663  355334 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:26:48.671110  355334 main.go:141] libmachine: Using API Version  1
	I0108 21:26:48.671136  355334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:26:48.671456  355334 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:26:48.671639  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetMachineName
	I0108 21:26:48.671779  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:26:48.671951  355334 start.go:159] libmachine.API.Create for "multinode-962345" (driver="kvm2")
	I0108 21:26:48.671987  355334 client.go:168] LocalClient.Create starting
	I0108 21:26:48.672022  355334 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem
	I0108 21:26:48.672060  355334 main.go:141] libmachine: Decoding PEM data...
	I0108 21:26:48.672084  355334 main.go:141] libmachine: Parsing certificate...
	I0108 21:26:48.672154  355334 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem
	I0108 21:26:48.672189  355334 main.go:141] libmachine: Decoding PEM data...
	I0108 21:26:48.672208  355334 main.go:141] libmachine: Parsing certificate...
	I0108 21:26:48.672236  355334 main.go:141] libmachine: Running pre-create checks...
	I0108 21:26:48.672250  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .PreCreateCheck
	I0108 21:26:48.672410  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetConfigRaw
	I0108 21:26:48.672796  355334 main.go:141] libmachine: Creating machine...
	I0108 21:26:48.672813  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .Create
	I0108 21:26:48.672947  355334 main.go:141] libmachine: (multinode-962345-m02) Creating KVM machine...
	I0108 21:26:48.674046  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found existing default KVM network
	I0108 21:26:48.674195  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found existing private KVM network mk-multinode-962345
	I0108 21:26:48.674287  355334 main.go:141] libmachine: (multinode-962345-m02) Setting up store path in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02 ...
	I0108 21:26:48.674309  355334 main.go:141] libmachine: (multinode-962345-m02) Building disk image from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 21:26:48.674427  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:48.674310  355698 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:26:48.674519  355334 main.go:141] libmachine: (multinode-962345-m02) Downloading /home/jenkins/minikube-integration/17866-334768/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 21:26:48.906177  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:48.906027  355698 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa...
	I0108 21:26:49.095187  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:49.095021  355698 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/multinode-962345-m02.rawdisk...
	I0108 21:26:49.095227  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Writing magic tar header
	I0108 21:26:49.095249  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Writing SSH key tar header
	I0108 21:26:49.095392  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:49.095255  355698 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02 ...
	I0108 21:26:49.095416  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02
	I0108 21:26:49.095432  355334 main.go:141] libmachine: (multinode-962345-m02) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02 (perms=drwx------)
	I0108 21:26:49.095448  355334 main.go:141] libmachine: (multinode-962345-m02) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines (perms=drwxr-xr-x)
	I0108 21:26:49.095457  355334 main.go:141] libmachine: (multinode-962345-m02) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube (perms=drwxr-xr-x)
	I0108 21:26:49.095467  355334 main.go:141] libmachine: (multinode-962345-m02) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768 (perms=drwxrwxr-x)
	I0108 21:26:49.095476  355334 main.go:141] libmachine: (multinode-962345-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 21:26:49.095486  355334 main.go:141] libmachine: (multinode-962345-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 21:26:49.095494  355334 main.go:141] libmachine: (multinode-962345-m02) Creating domain...
	I0108 21:26:49.095502  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines
	I0108 21:26:49.095513  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:26:49.095528  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768
	I0108 21:26:49.095536  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 21:26:49.095544  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Checking permissions on dir: /home/jenkins
	I0108 21:26:49.095551  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Checking permissions on dir: /home
	I0108 21:26:49.095559  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Skipping /home - not owner
	I0108 21:26:49.096537  355334 main.go:141] libmachine: (multinode-962345-m02) define libvirt domain using xml: 
	I0108 21:26:49.096563  355334 main.go:141] libmachine: (multinode-962345-m02) <domain type='kvm'>
	I0108 21:26:49.096574  355334 main.go:141] libmachine: (multinode-962345-m02)   <name>multinode-962345-m02</name>
	I0108 21:26:49.096585  355334 main.go:141] libmachine: (multinode-962345-m02)   <memory unit='MiB'>2200</memory>
	I0108 21:26:49.096612  355334 main.go:141] libmachine: (multinode-962345-m02)   <vcpu>2</vcpu>
	I0108 21:26:49.096628  355334 main.go:141] libmachine: (multinode-962345-m02)   <features>
	I0108 21:26:49.096636  355334 main.go:141] libmachine: (multinode-962345-m02)     <acpi/>
	I0108 21:26:49.096648  355334 main.go:141] libmachine: (multinode-962345-m02)     <apic/>
	I0108 21:26:49.096656  355334 main.go:141] libmachine: (multinode-962345-m02)     <pae/>
	I0108 21:26:49.096662  355334 main.go:141] libmachine: (multinode-962345-m02)     
	I0108 21:26:49.096670  355334 main.go:141] libmachine: (multinode-962345-m02)   </features>
	I0108 21:26:49.096676  355334 main.go:141] libmachine: (multinode-962345-m02)   <cpu mode='host-passthrough'>
	I0108 21:26:49.096684  355334 main.go:141] libmachine: (multinode-962345-m02)   
	I0108 21:26:49.096692  355334 main.go:141] libmachine: (multinode-962345-m02)   </cpu>
	I0108 21:26:49.096698  355334 main.go:141] libmachine: (multinode-962345-m02)   <os>
	I0108 21:26:49.096706  355334 main.go:141] libmachine: (multinode-962345-m02)     <type>hvm</type>
	I0108 21:26:49.096712  355334 main.go:141] libmachine: (multinode-962345-m02)     <boot dev='cdrom'/>
	I0108 21:26:49.096720  355334 main.go:141] libmachine: (multinode-962345-m02)     <boot dev='hd'/>
	I0108 21:26:49.096726  355334 main.go:141] libmachine: (multinode-962345-m02)     <bootmenu enable='no'/>
	I0108 21:26:49.096734  355334 main.go:141] libmachine: (multinode-962345-m02)   </os>
	I0108 21:26:49.096769  355334 main.go:141] libmachine: (multinode-962345-m02)   <devices>
	I0108 21:26:49.096796  355334 main.go:141] libmachine: (multinode-962345-m02)     <disk type='file' device='cdrom'>
	I0108 21:26:49.096817  355334 main.go:141] libmachine: (multinode-962345-m02)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/boot2docker.iso'/>
	I0108 21:26:49.096831  355334 main.go:141] libmachine: (multinode-962345-m02)       <target dev='hdc' bus='scsi'/>
	I0108 21:26:49.096847  355334 main.go:141] libmachine: (multinode-962345-m02)       <readonly/>
	I0108 21:26:49.096859  355334 main.go:141] libmachine: (multinode-962345-m02)     </disk>
	I0108 21:26:49.096874  355334 main.go:141] libmachine: (multinode-962345-m02)     <disk type='file' device='disk'>
	I0108 21:26:49.096892  355334 main.go:141] libmachine: (multinode-962345-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 21:26:49.096915  355334 main.go:141] libmachine: (multinode-962345-m02)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/multinode-962345-m02.rawdisk'/>
	I0108 21:26:49.096930  355334 main.go:141] libmachine: (multinode-962345-m02)       <target dev='hda' bus='virtio'/>
	I0108 21:26:49.096943  355334 main.go:141] libmachine: (multinode-962345-m02)     </disk>
	I0108 21:26:49.096968  355334 main.go:141] libmachine: (multinode-962345-m02)     <interface type='network'>
	I0108 21:26:49.096986  355334 main.go:141] libmachine: (multinode-962345-m02)       <source network='mk-multinode-962345'/>
	I0108 21:26:49.097022  355334 main.go:141] libmachine: (multinode-962345-m02)       <model type='virtio'/>
	I0108 21:26:49.097045  355334 main.go:141] libmachine: (multinode-962345-m02)     </interface>
	I0108 21:26:49.097070  355334 main.go:141] libmachine: (multinode-962345-m02)     <interface type='network'>
	I0108 21:26:49.097093  355334 main.go:141] libmachine: (multinode-962345-m02)       <source network='default'/>
	I0108 21:26:49.097108  355334 main.go:141] libmachine: (multinode-962345-m02)       <model type='virtio'/>
	I0108 21:26:49.097120  355334 main.go:141] libmachine: (multinode-962345-m02)     </interface>
	I0108 21:26:49.097134  355334 main.go:141] libmachine: (multinode-962345-m02)     <serial type='pty'>
	I0108 21:26:49.097146  355334 main.go:141] libmachine: (multinode-962345-m02)       <target port='0'/>
	I0108 21:26:49.097157  355334 main.go:141] libmachine: (multinode-962345-m02)     </serial>
	I0108 21:26:49.097164  355334 main.go:141] libmachine: (multinode-962345-m02)     <console type='pty'>
	I0108 21:26:49.097181  355334 main.go:141] libmachine: (multinode-962345-m02)       <target type='serial' port='0'/>
	I0108 21:26:49.097200  355334 main.go:141] libmachine: (multinode-962345-m02)     </console>
	I0108 21:26:49.097215  355334 main.go:141] libmachine: (multinode-962345-m02)     <rng model='virtio'>
	I0108 21:26:49.097230  355334 main.go:141] libmachine: (multinode-962345-m02)       <backend model='random'>/dev/random</backend>
	I0108 21:26:49.097243  355334 main.go:141] libmachine: (multinode-962345-m02)     </rng>
	I0108 21:26:49.097264  355334 main.go:141] libmachine: (multinode-962345-m02)     
	I0108 21:26:49.097277  355334 main.go:141] libmachine: (multinode-962345-m02)     
	I0108 21:26:49.097284  355334 main.go:141] libmachine: (multinode-962345-m02)   </devices>
	I0108 21:26:49.097292  355334 main.go:141] libmachine: (multinode-962345-m02) </domain>
	I0108 21:26:49.097303  355334 main.go:141] libmachine: (multinode-962345-m02) 
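
The driver builds the domain XML dumped above and hands it to libvirt to define and boot the VM. A stripped-down sketch of that step, assuming the libvirt.org/go/libvirt bindings (the real logic lives in minikube's docker-machine-driver-kvm2 plugin):

```go
package kvmdriver

import (
	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart defines a domain from XML like the dump above and boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // corresponds to "Creating domain..." in the log
}
```
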
	I0108 21:26:49.103809  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:ef:70:3d in network default
	I0108 21:26:49.104364  355334 main.go:141] libmachine: (multinode-962345-m02) Ensuring networks are active...
	I0108 21:26:49.104393  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:49.105000  355334 main.go:141] libmachine: (multinode-962345-m02) Ensuring network default is active
	I0108 21:26:49.105275  355334 main.go:141] libmachine: (multinode-962345-m02) Ensuring network mk-multinode-962345 is active
	I0108 21:26:49.105612  355334 main.go:141] libmachine: (multinode-962345-m02) Getting domain xml...
	I0108 21:26:49.106209  355334 main.go:141] libmachine: (multinode-962345-m02) Creating domain...
	I0108 21:26:50.352611  355334 main.go:141] libmachine: (multinode-962345-m02) Waiting to get IP...
	I0108 21:26:50.353461  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:50.353850  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:50.353930  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:50.353856  355698 retry.go:31] will retry after 274.85107ms: waiting for machine to come up
	I0108 21:26:50.630382  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:50.630725  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:50.630766  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:50.630668  355698 retry.go:31] will retry after 328.778158ms: waiting for machine to come up
	I0108 21:26:50.961192  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:50.961628  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:50.961661  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:50.961578  355698 retry.go:31] will retry after 376.779481ms: waiting for machine to come up
	I0108 21:26:51.340174  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:51.340611  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:51.340638  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:51.340551  355698 retry.go:31] will retry after 425.963398ms: waiting for machine to come up
	I0108 21:26:51.768267  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:51.768741  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:51.768775  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:51.768669  355698 retry.go:31] will retry after 559.854173ms: waiting for machine to come up
	I0108 21:26:52.330447  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:52.330915  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:52.330944  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:52.330874  355698 retry.go:31] will retry after 945.695175ms: waiting for machine to come up
	I0108 21:26:53.278462  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:53.278810  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:53.278841  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:53.278753  355698 retry.go:31] will retry after 1.052333203s: waiting for machine to come up
	I0108 21:26:54.332854  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:54.333125  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:54.333164  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:54.333069  355698 retry.go:31] will retry after 999.654188ms: waiting for machine to come up
	I0108 21:26:55.334155  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:55.334566  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:55.334590  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:55.334514  355698 retry.go:31] will retry after 1.575834862s: waiting for machine to come up
	I0108 21:26:56.911430  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:56.911870  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:56.911891  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:56.911831  355698 retry.go:31] will retry after 1.828903239s: waiting for machine to come up
	I0108 21:26:58.742017  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:26:58.742512  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:26:58.742549  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:26:58.742440  355698 retry.go:31] will retry after 2.358069025s: waiting for machine to come up
	I0108 21:27:01.102359  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:01.102786  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:27:01.102816  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:27:01.102723  355698 retry.go:31] will retry after 2.893533347s: waiting for machine to come up
	I0108 21:27:03.998080  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:03.998650  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:27:03.998680  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:27:03.998583  355698 retry.go:31] will retry after 3.110138933s: waiting for machine to come up
	I0108 21:27:07.112442  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:07.112960  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find current IP address of domain multinode-962345-m02 in network mk-multinode-962345
	I0108 21:27:07.112985  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | I0108 21:27:07.112889  355698 retry.go:31] will retry after 4.407496311s: waiting for machine to come up
	I0108 21:27:11.524305  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.524754  355334 main.go:141] libmachine: (multinode-962345-m02) Found IP for machine: 192.168.39.111
	I0108 21:27:11.524779  355334 main.go:141] libmachine: (multinode-962345-m02) Reserving static IP address...
	I0108 21:27:11.524796  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.525235  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | unable to find host DHCP lease matching {name: "multinode-962345-m02", mac: "52:54:00:3b:b0:38", ip: "192.168.39.111"} in network mk-multinode-962345
	I0108 21:27:11.597674  355334 main.go:141] libmachine: (multinode-962345-m02) Reserved static IP address: 192.168.39.111
	I0108 21:27:11.597710  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Getting to WaitForSSH function...
	I0108 21:27:11.597719  355334 main.go:141] libmachine: (multinode-962345-m02) Waiting for SSH to be available...
	I0108 21:27:11.600307  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.600787  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:11.600820  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.600993  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Using SSH client type: external
	I0108 21:27:11.601017  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa (-rw-------)
	I0108 21:27:11.601052  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:27:11.601067  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | About to run SSH command:
	I0108 21:27:11.601106  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | exit 0
	I0108 21:27:11.691177  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | SSH cmd err, output: <nil>: 
	I0108 21:27:11.691505  355334 main.go:141] libmachine: (multinode-962345-m02) KVM machine creation complete!
	I0108 21:27:11.691855  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetConfigRaw
	I0108 21:27:11.692591  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:27:11.692830  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:27:11.692982  355334 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 21:27:11.693001  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetState
	I0108 21:27:11.694358  355334 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 21:27:11.694374  355334 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 21:27:11.694381  355334 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 21:27:11.694388  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:11.696753  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.697092  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:11.697123  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.697272  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:11.697462  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:11.697631  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:11.697781  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:11.697939  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:27:11.698462  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:27:11.698481  355334 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 21:27:11.810923  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:27:11.810953  355334 main.go:141] libmachine: Detecting the provisioner...
	I0108 21:27:11.810962  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:11.814160  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.814543  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:11.814583  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.814709  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:11.814955  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:11.815136  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:11.815312  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:11.815560  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:27:11.815942  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:27:11.815955  355334 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 21:27:11.928127  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 21:27:11.928227  355334 main.go:141] libmachine: found compatible host: buildroot
	I0108 21:27:11.928237  355334 main.go:141] libmachine: Provisioning with buildroot...
	I0108 21:27:11.928246  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetMachineName
	I0108 21:27:11.928561  355334 buildroot.go:166] provisioning hostname "multinode-962345-m02"
	I0108 21:27:11.928589  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetMachineName
	I0108 21:27:11.928820  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:11.931536  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.931900  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:11.931929  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:11.932048  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:11.932253  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:11.932415  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:11.932604  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:11.932765  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:27:11.933169  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:27:11.933184  355334 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-962345-m02 && echo "multinode-962345-m02" | sudo tee /etc/hostname
	I0108 21:27:12.059748  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-962345-m02
	
	I0108 21:27:12.059786  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:12.062688  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.063087  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:12.063119  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.063322  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:12.063585  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:12.063767  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:12.063925  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:12.064104  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:27:12.064479  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:27:12.064498  355334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-962345-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-962345-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-962345-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:27:12.184131  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:27:12.184169  355334 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 21:27:12.184230  355334 buildroot.go:174] setting up certificates
	I0108 21:27:12.184246  355334 provision.go:83] configureAuth start
	I0108 21:27:12.184267  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetMachineName
	I0108 21:27:12.184626  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetIP
	I0108 21:27:12.187443  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.187854  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:12.187877  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.188032  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:12.190613  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.190907  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:12.190938  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.191030  355334 provision.go:138] copyHostCerts
	I0108 21:27:12.191065  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:27:12.191100  355334 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 21:27:12.191109  355334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:27:12.191172  355334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 21:27:12.191306  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:27:12.191330  355334 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 21:27:12.191335  355334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:27:12.191393  355334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 21:27:12.191457  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:27:12.191475  355334 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 21:27:12.191480  355334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:27:12.191502  355334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 21:27:12.191548  355334 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.multinode-962345-m02 san=[192.168.39.111 192.168.39.111 localhost 127.0.0.1 minikube multinode-962345-m02]
	I0108 21:27:12.326032  355334 provision.go:172] copyRemoteCerts
	I0108 21:27:12.326095  355334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:27:12.326122  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:12.328785  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.329088  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:12.329120  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.329279  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:12.329491  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:12.329654  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:12.329831  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa Username:docker}
	I0108 21:27:12.416519  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:27:12.416615  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:27:12.441553  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:27:12.441623  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 21:27:12.464257  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:27:12.464324  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:27:12.490568  355334 provision.go:86] duration metric: configureAuth took 306.304586ms
	I0108 21:27:12.490598  355334 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:27:12.490803  355334 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:27:12.490898  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:12.494122  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.494549  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:12.494583  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.494780  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:12.494978  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:12.495188  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:12.495336  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:12.495550  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:27:12.495995  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:27:12.496021  355334 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:27:12.805025  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:27:12.805060  355334 main.go:141] libmachine: Checking connection to Docker...
	I0108 21:27:12.805077  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetURL
	I0108 21:27:12.806477  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | Using libvirt version 6000000
	I0108 21:27:12.809116  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.809531  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:12.809561  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.809721  355334 main.go:141] libmachine: Docker is up and running!
	I0108 21:27:12.809740  355334 main.go:141] libmachine: Reticulating splines...
	I0108 21:27:12.809747  355334 client.go:171] LocalClient.Create took 24.137750917s
	I0108 21:27:12.809782  355334 start.go:167] duration metric: libmachine.API.Create for "multinode-962345" took 24.137833691s
	I0108 21:27:12.809797  355334 start.go:300] post-start starting for "multinode-962345-m02" (driver="kvm2")
	I0108 21:27:12.809812  355334 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:27:12.809836  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:27:12.810096  355334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:27:12.810116  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:12.812390  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.812780  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:12.812809  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.812957  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:12.813133  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:12.813295  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:12.813507  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa Username:docker}
	I0108 21:27:12.901659  355334 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:27:12.905755  355334 command_runner.go:130] > NAME=Buildroot
	I0108 21:27:12.905781  355334 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0108 21:27:12.905787  355334 command_runner.go:130] > ID=buildroot
	I0108 21:27:12.905797  355334 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:27:12.905803  355334 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:27:12.905981  355334 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:27:12.906001  355334 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 21:27:12.906091  355334 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 21:27:12.906179  355334 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 21:27:12.906190  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /etc/ssl/certs/3419822.pem
	I0108 21:27:12.906299  355334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:27:12.915791  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:27:12.937520  355334 start.go:303] post-start completed in 127.703948ms
	I0108 21:27:12.937584  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetConfigRaw
	I0108 21:27:12.938196  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetIP
	I0108 21:27:12.940854  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.941158  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:12.941198  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.941413  355334 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:27:12.941590  355334 start.go:128] duration metric: createHost completed in 24.287692282s
	I0108 21:27:12.941613  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:12.943696  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.943997  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:12.944024  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:12.944149  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:12.944353  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:12.944609  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:12.944790  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:12.944943  355334 main.go:141] libmachine: Using SSH client type: native
	I0108 21:27:12.945397  355334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:27:12.945416  355334 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:27:13.056381  355334 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704749233.024894551
	
	I0108 21:27:13.056421  355334 fix.go:206] guest clock: 1704749233.024894551
	I0108 21:27:13.056436  355334 fix.go:219] Guest: 2024-01-08 21:27:13.024894551 +0000 UTC Remote: 2024-01-08 21:27:12.941601858 +0000 UTC m=+89.779609337 (delta=83.292693ms)
	I0108 21:27:13.056456  355334 fix.go:190] guest clock delta is within tolerance: 83.292693ms
	I0108 21:27:13.056463  355334 start.go:83] releasing machines lock for "multinode-962345-m02", held for 24.402639545s
	I0108 21:27:13.056492  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:27:13.056862  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetIP
	I0108 21:27:13.060235  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:13.060651  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:13.060675  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:13.063430  355334 out.go:177] * Found network options:
	I0108 21:27:13.065068  355334 out.go:177]   - NO_PROXY=192.168.39.239
	W0108 21:27:13.066458  355334 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:27:13.066513  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:27:13.067213  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:27:13.067449  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:27:13.067537  355334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:27:13.067580  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	W0108 21:27:13.067642  355334 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:27:13.067703  355334 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:27:13.067718  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:27:13.070421  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:13.070582  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:13.070843  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:13.070880  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:13.071007  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:13.071046  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:13.071063  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:13.071179  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:27:13.071262  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:13.071331  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:27:13.071400  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:13.071461  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:27:13.071568  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa Username:docker}
	I0108 21:27:13.071624  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa Username:docker}
	I0108 21:27:13.308925  355334 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:27:13.309098  355334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:27:13.315284  355334 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 21:27:13.315686  355334 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:27:13.315760  355334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:27:13.331813  355334 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:27:13.332232  355334 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:27:13.332252  355334 start.go:475] detecting cgroup driver to use...
	I0108 21:27:13.332319  355334 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:27:13.347681  355334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:27:13.362197  355334 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:27:13.362275  355334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:27:13.376992  355334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:27:13.391192  355334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:27:13.405340  355334 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0108 21:27:13.498830  355334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:27:13.512462  355334 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 21:27:13.613206  355334 docker.go:219] disabling docker service ...
	I0108 21:27:13.613284  355334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:27:13.627433  355334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:27:13.639567  355334 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0108 21:27:13.639684  355334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:27:13.754748  355334 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 21:27:13.754845  355334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:27:13.767339  355334 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0108 21:27:13.767709  355334 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 21:27:13.870805  355334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:27:13.883675  355334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:27:13.900714  355334 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 21:27:13.900768  355334 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:27:13.900845  355334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:27:13.910217  355334 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:27:13.910305  355334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:27:13.919810  355334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:27:13.929138  355334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:27:13.938117  355334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:27:13.947150  355334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:27:13.955326  355334 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:27:13.955374  355334 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:27:13.955423  355334 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 21:27:13.969011  355334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:27:13.977886  355334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:27:14.103713  355334 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:27:14.282800  355334 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:27:14.282894  355334 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:27:14.287746  355334 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 21:27:14.287777  355334 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:27:14.287790  355334 command_runner.go:130] > Device: 16h/22d	Inode: 705         Links: 1
	I0108 21:27:14.287801  355334 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:27:14.287813  355334 command_runner.go:130] > Access: 2024-01-08 21:27:14.240564994 +0000
	I0108 21:27:14.287823  355334 command_runner.go:130] > Modify: 2024-01-08 21:27:14.240564994 +0000
	I0108 21:27:14.287834  355334 command_runner.go:130] > Change: 2024-01-08 21:27:14.240564994 +0000
	I0108 21:27:14.287845  355334 command_runner.go:130] >  Birth: -
	I0108 21:27:14.287869  355334 start.go:543] Will wait 60s for crictl version
	I0108 21:27:14.287921  355334 ssh_runner.go:195] Run: which crictl
	I0108 21:27:14.291822  355334 command_runner.go:130] > /usr/bin/crictl
	I0108 21:27:14.291897  355334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:27:14.332570  355334 command_runner.go:130] > Version:  0.1.0
	I0108 21:27:14.332595  355334 command_runner.go:130] > RuntimeName:  cri-o
	I0108 21:27:14.332602  355334 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 21:27:14.332610  355334 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:27:14.334333  355334 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:27:14.334417  355334 ssh_runner.go:195] Run: crio --version
	I0108 21:27:14.386858  355334 command_runner.go:130] > crio version 1.24.1
	I0108 21:27:14.386889  355334 command_runner.go:130] > Version:          1.24.1
	I0108 21:27:14.386900  355334 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:27:14.386906  355334 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:27:14.386915  355334 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:27:14.386923  355334 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:27:14.386930  355334 command_runner.go:130] > Compiler:         gc
	I0108 21:27:14.386938  355334 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:27:14.386946  355334 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:27:14.386962  355334 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:27:14.386973  355334 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:27:14.386981  355334 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:27:14.387076  355334 ssh_runner.go:195] Run: crio --version
	I0108 21:27:14.435680  355334 command_runner.go:130] > crio version 1.24.1
	I0108 21:27:14.435721  355334 command_runner.go:130] > Version:          1.24.1
	I0108 21:27:14.435729  355334 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:27:14.435734  355334 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:27:14.435742  355334 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:27:14.435747  355334 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:27:14.435751  355334 command_runner.go:130] > Compiler:         gc
	I0108 21:27:14.435755  355334 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:27:14.435760  355334 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:27:14.435767  355334 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:27:14.435771  355334 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:27:14.435778  355334 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:27:14.438141  355334 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:27:14.439666  355334 out.go:177]   - env NO_PROXY=192.168.39.239
	I0108 21:27:14.441017  355334 main.go:141] libmachine: (multinode-962345-m02) Calling .GetIP
	I0108 21:27:14.443941  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:14.444386  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:27:14.444422  355334 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:27:14.444619  355334 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:27:14.449433  355334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:27:14.461484  355334 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345 for IP: 192.168.39.111
	I0108 21:27:14.461522  355334 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:27:14.461719  355334 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 21:27:14.461787  355334 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 21:27:14.461806  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:27:14.461830  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:27:14.461853  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:27:14.461870  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:27:14.461945  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 21:27:14.461987  355334 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 21:27:14.462004  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:27:14.462042  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:27:14.462075  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:27:14.462109  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 21:27:14.462165  355334 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:27:14.462241  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem -> /usr/share/ca-certificates/341982.pem
	I0108 21:27:14.462264  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /usr/share/ca-certificates/3419822.pem
	I0108 21:27:14.462285  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:14.462900  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:27:14.485266  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:27:14.507131  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:27:14.528700  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:27:14.549987  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 21:27:14.571009  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 21:27:14.592616  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:27:14.614880  355334 ssh_runner.go:195] Run: openssl version
	I0108 21:27:14.620300  355334 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:27:14.620688  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 21:27:14.631287  355334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 21:27:14.635709  355334 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:27:14.635925  355334 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:27:14.635977  355334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 21:27:14.640950  355334 command_runner.go:130] > 51391683
	I0108 21:27:14.641175  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 21:27:14.651623  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 21:27:14.662005  355334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 21:27:14.666505  355334 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:27:14.666758  355334 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:27:14.666801  355334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 21:27:14.671754  355334 command_runner.go:130] > 3ec20f2e
	I0108 21:27:14.671964  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:27:14.681883  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:27:14.691935  355334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:14.696702  355334 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:14.696728  355334 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:14.696770  355334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:27:14.701864  355334 command_runner.go:130] > b5213941
	I0108 21:27:14.702172  355334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:27:14.712582  355334 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:27:14.716804  355334 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:27:14.716850  355334 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:27:14.716937  355334 ssh_runner.go:195] Run: crio config
	I0108 21:27:14.771161  355334 command_runner.go:130] ! time="2024-01-08 21:27:14.743047341Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 21:27:14.771324  355334 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 21:27:14.783438  355334 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 21:27:14.783462  355334 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 21:27:14.783468  355334 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 21:27:14.783471  355334 command_runner.go:130] > #
	I0108 21:27:14.783477  355334 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 21:27:14.783490  355334 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 21:27:14.783496  355334 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 21:27:14.783534  355334 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 21:27:14.783545  355334 command_runner.go:130] > # reload'.
	I0108 21:27:14.783551  355334 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 21:27:14.783557  355334 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 21:27:14.783563  355334 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 21:27:14.783569  355334 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 21:27:14.783575  355334 command_runner.go:130] > [crio]
	I0108 21:27:14.783580  355334 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 21:27:14.783585  355334 command_runner.go:130] > # containers images, in this directory.
	I0108 21:27:14.783591  355334 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 21:27:14.783601  355334 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 21:27:14.783608  355334 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 21:27:14.783614  355334 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 21:27:14.783622  355334 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 21:27:14.783626  355334 command_runner.go:130] > storage_driver = "overlay"
	I0108 21:27:14.783632  355334 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 21:27:14.783639  355334 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 21:27:14.783646  355334 command_runner.go:130] > storage_option = [
	I0108 21:27:14.783651  355334 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 21:27:14.783655  355334 command_runner.go:130] > ]
	I0108 21:27:14.783662  355334 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 21:27:14.783668  355334 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 21:27:14.783673  355334 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 21:27:14.783681  355334 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 21:27:14.783687  355334 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 21:27:14.783694  355334 command_runner.go:130] > # always happen on a node reboot
	I0108 21:27:14.783699  355334 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 21:27:14.783706  355334 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 21:27:14.783713  355334 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 21:27:14.783724  355334 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 21:27:14.783731  355334 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 21:27:14.783739  355334 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 21:27:14.783749  355334 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 21:27:14.783755  355334 command_runner.go:130] > # internal_wipe = true
	I0108 21:27:14.783762  355334 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 21:27:14.783770  355334 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 21:27:14.783776  355334 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 21:27:14.783784  355334 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 21:27:14.783790  355334 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 21:27:14.783796  355334 command_runner.go:130] > [crio.api]
	I0108 21:27:14.783802  355334 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 21:27:14.783808  355334 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 21:27:14.783814  355334 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 21:27:14.783825  355334 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 21:27:14.783834  355334 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 21:27:14.783841  355334 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 21:27:14.783848  355334 command_runner.go:130] > # stream_port = "0"
	I0108 21:27:14.783856  355334 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 21:27:14.783863  355334 command_runner.go:130] > # stream_enable_tls = false
	I0108 21:27:14.783869  355334 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 21:27:14.783876  355334 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 21:27:14.783882  355334 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 21:27:14.783890  355334 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 21:27:14.783896  355334 command_runner.go:130] > # minutes.
	I0108 21:27:14.783901  355334 command_runner.go:130] > # stream_tls_cert = ""
	I0108 21:27:14.783917  355334 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 21:27:14.783926  355334 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 21:27:14.783936  355334 command_runner.go:130] > # stream_tls_key = ""
	I0108 21:27:14.783947  355334 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 21:27:14.783960  355334 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 21:27:14.783971  355334 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 21:27:14.783980  355334 command_runner.go:130] > # stream_tls_ca = ""
	I0108 21:27:14.783999  355334 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:27:14.784009  355334 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 21:27:14.784023  355334 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:27:14.784033  355334 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 21:27:14.784061  355334 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 21:27:14.784075  355334 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 21:27:14.784082  355334 command_runner.go:130] > [crio.runtime]
	I0108 21:27:14.784092  355334 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 21:27:14.784102  355334 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 21:27:14.784112  355334 command_runner.go:130] > # "nofile=1024:2048"
	I0108 21:27:14.784124  355334 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 21:27:14.784136  355334 command_runner.go:130] > # default_ulimits = [
	I0108 21:27:14.784141  355334 command_runner.go:130] > # ]
	I0108 21:27:14.784148  355334 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 21:27:14.784152  355334 command_runner.go:130] > # no_pivot = false
	I0108 21:27:14.784158  355334 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 21:27:14.784164  355334 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 21:27:14.784169  355334 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 21:27:14.784174  355334 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 21:27:14.784181  355334 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 21:27:14.784188  355334 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:27:14.784195  355334 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 21:27:14.784200  355334 command_runner.go:130] > # Cgroup setting for conmon
	I0108 21:27:14.784206  355334 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 21:27:14.784212  355334 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 21:27:14.784219  355334 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 21:27:14.784228  355334 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 21:27:14.784235  355334 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:27:14.784241  355334 command_runner.go:130] > conmon_env = [
	I0108 21:27:14.784247  355334 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 21:27:14.784253  355334 command_runner.go:130] > ]
	I0108 21:27:14.784258  355334 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 21:27:14.784265  355334 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 21:27:14.784271  355334 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 21:27:14.784277  355334 command_runner.go:130] > # default_env = [
	I0108 21:27:14.784281  355334 command_runner.go:130] > # ]
	I0108 21:27:14.784287  355334 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 21:27:14.784293  355334 command_runner.go:130] > # selinux = false
	I0108 21:27:14.784299  355334 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 21:27:14.784307  355334 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 21:27:14.784314  355334 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 21:27:14.784324  355334 command_runner.go:130] > # seccomp_profile = ""
	I0108 21:27:14.784333  355334 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 21:27:14.784346  355334 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 21:27:14.784361  355334 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 21:27:14.784372  355334 command_runner.go:130] > # which might increase security.
	I0108 21:27:14.784380  355334 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 21:27:14.784392  355334 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 21:27:14.784405  355334 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 21:27:14.784418  355334 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 21:27:14.784430  355334 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 21:27:14.784438  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:14.784443  355334 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 21:27:14.784450  355334 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 21:27:14.784455  355334 command_runner.go:130] > # the cgroup blockio controller.
	I0108 21:27:14.784462  355334 command_runner.go:130] > # blockio_config_file = ""
	I0108 21:27:14.784468  355334 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 21:27:14.784474  355334 command_runner.go:130] > # irqbalance daemon.
	I0108 21:27:14.784480  355334 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 21:27:14.784489  355334 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 21:27:14.784494  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:14.784501  355334 command_runner.go:130] > # rdt_config_file = ""
	I0108 21:27:14.784506  355334 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 21:27:14.784513  355334 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 21:27:14.784519  355334 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 21:27:14.784525  355334 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 21:27:14.784532  355334 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 21:27:14.784538  355334 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 21:27:14.784544  355334 command_runner.go:130] > # will be added.
	I0108 21:27:14.784549  355334 command_runner.go:130] > # default_capabilities = [
	I0108 21:27:14.784555  355334 command_runner.go:130] > # 	"CHOWN",
	I0108 21:27:14.784559  355334 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 21:27:14.784563  355334 command_runner.go:130] > # 	"FSETID",
	I0108 21:27:14.784567  355334 command_runner.go:130] > # 	"FOWNER",
	I0108 21:27:14.784571  355334 command_runner.go:130] > # 	"SETGID",
	I0108 21:27:14.784577  355334 command_runner.go:130] > # 	"SETUID",
	I0108 21:27:14.784581  355334 command_runner.go:130] > # 	"SETPCAP",
	I0108 21:27:14.784587  355334 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 21:27:14.784591  355334 command_runner.go:130] > # 	"KILL",
	I0108 21:27:14.784597  355334 command_runner.go:130] > # ]
	I0108 21:27:14.784604  355334 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 21:27:14.784612  355334 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:27:14.784616  355334 command_runner.go:130] > # default_sysctls = [
	I0108 21:27:14.784624  355334 command_runner.go:130] > # ]
	I0108 21:27:14.784631  355334 command_runner.go:130] > # List of devices on the host that a
	I0108 21:27:14.784644  355334 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 21:27:14.784654  355334 command_runner.go:130] > # allowed_devices = [
	I0108 21:27:14.784663  355334 command_runner.go:130] > # 	"/dev/fuse",
	I0108 21:27:14.784670  355334 command_runner.go:130] > # ]
	I0108 21:27:14.784679  355334 command_runner.go:130] > # List of additional devices, specified as
	I0108 21:27:14.784693  355334 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 21:27:14.784704  355334 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 21:27:14.784731  355334 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:27:14.784742  355334 command_runner.go:130] > # additional_devices = [
	I0108 21:27:14.784747  355334 command_runner.go:130] > # ]
	I0108 21:27:14.784753  355334 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 21:27:14.784760  355334 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 21:27:14.784764  355334 command_runner.go:130] > # 	"/etc/cdi",
	I0108 21:27:14.784771  355334 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 21:27:14.784775  355334 command_runner.go:130] > # ]
	I0108 21:27:14.784781  355334 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 21:27:14.784791  355334 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 21:27:14.784797  355334 command_runner.go:130] > # Defaults to false.
	I0108 21:27:14.784803  355334 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 21:27:14.784809  355334 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 21:27:14.784818  355334 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 21:27:14.784822  355334 command_runner.go:130] > # hooks_dir = [
	I0108 21:27:14.784827  355334 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 21:27:14.784831  355334 command_runner.go:130] > # ]
	I0108 21:27:14.784840  355334 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 21:27:14.784846  355334 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 21:27:14.784854  355334 command_runner.go:130] > # its default mounts from the following two files:
	I0108 21:27:14.784858  355334 command_runner.go:130] > #
	I0108 21:27:14.784867  355334 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 21:27:14.784873  355334 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 21:27:14.784881  355334 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 21:27:14.784885  355334 command_runner.go:130] > #
	I0108 21:27:14.784892  355334 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 21:27:14.784898  355334 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 21:27:14.784906  355334 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 21:27:14.784912  355334 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 21:27:14.784917  355334 command_runner.go:130] > #
	I0108 21:27:14.784922  355334 command_runner.go:130] > # default_mounts_file = ""
	I0108 21:27:14.784930  355334 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 21:27:14.784937  355334 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 21:27:14.784943  355334 command_runner.go:130] > pids_limit = 1024
	I0108 21:27:14.784948  355334 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 21:27:14.784955  355334 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 21:27:14.784962  355334 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 21:27:14.784972  355334 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 21:27:14.784976  355334 command_runner.go:130] > # log_size_max = -1
	I0108 21:27:14.784986  355334 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 21:27:14.784993  355334 command_runner.go:130] > # log_to_journald = false
	I0108 21:27:14.784999  355334 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 21:27:14.785006  355334 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 21:27:14.785011  355334 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 21:27:14.785017  355334 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 21:27:14.785022  355334 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 21:27:14.785026  355334 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 21:27:14.785032  355334 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 21:27:14.785041  355334 command_runner.go:130] > # read_only = false
	I0108 21:27:14.785051  355334 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 21:27:14.785064  355334 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 21:27:14.785075  355334 command_runner.go:130] > # live configuration reload.
	I0108 21:27:14.785084  355334 command_runner.go:130] > # log_level = "info"
	I0108 21:27:14.785094  355334 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 21:27:14.785104  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:14.785112  355334 command_runner.go:130] > # log_filter = ""
	I0108 21:27:14.785125  355334 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 21:27:14.785137  355334 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 21:27:14.785147  355334 command_runner.go:130] > # separated by comma.
	I0108 21:27:14.785155  355334 command_runner.go:130] > # uid_mappings = ""
	I0108 21:27:14.785169  355334 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 21:27:14.785181  355334 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 21:27:14.785192  355334 command_runner.go:130] > # separated by comma.
	I0108 21:27:14.785199  355334 command_runner.go:130] > # gid_mappings = ""
	I0108 21:27:14.785206  355334 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 21:27:14.785219  355334 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:27:14.785232  355334 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:27:14.785243  355334 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 21:27:14.785255  355334 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 21:27:14.785268  355334 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:27:14.785280  355334 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:27:14.785291  355334 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 21:27:14.785304  355334 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 21:27:14.785316  355334 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 21:27:14.785329  355334 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 21:27:14.785339  355334 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 21:27:14.785349  355334 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 21:27:14.785362  355334 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 21:27:14.785373  355334 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 21:27:14.785384  355334 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 21:27:14.785393  355334 command_runner.go:130] > drop_infra_ctr = false
	I0108 21:27:14.785406  355334 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 21:27:14.785418  355334 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 21:27:14.785432  355334 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 21:27:14.785442  355334 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 21:27:14.785453  355334 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 21:27:14.785464  355334 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 21:27:14.785472  355334 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 21:27:14.785483  355334 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 21:27:14.785534  355334 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 21:27:14.785570  355334 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 21:27:14.785580  355334 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 21:27:14.785593  355334 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 21:27:14.785602  355334 command_runner.go:130] > # default_runtime = "runc"
	I0108 21:27:14.785611  355334 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 21:27:14.785627  355334 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 21:27:14.785648  355334 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 21:27:14.785661  355334 command_runner.go:130] > # creation as a file is not desired either.
	I0108 21:27:14.785674  355334 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 21:27:14.785685  355334 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 21:27:14.785694  355334 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 21:27:14.785702  355334 command_runner.go:130] > # ]
	I0108 21:27:14.785715  355334 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 21:27:14.785728  355334 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 21:27:14.785742  355334 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 21:27:14.785755  355334 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 21:27:14.785763  355334 command_runner.go:130] > #
	I0108 21:27:14.785772  355334 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 21:27:14.785783  355334 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 21:27:14.785793  355334 command_runner.go:130] > #  runtime_type = "oci"
	I0108 21:27:14.785805  355334 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 21:27:14.785817  355334 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 21:27:14.785827  355334 command_runner.go:130] > #  allowed_annotations = []
	I0108 21:27:14.785833  355334 command_runner.go:130] > # Where:
	I0108 21:27:14.785844  355334 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 21:27:14.785856  355334 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 21:27:14.785869  355334 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 21:27:14.785891  355334 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 21:27:14.785899  355334 command_runner.go:130] > #   in $PATH.
	I0108 21:27:14.785908  355334 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 21:27:14.785918  355334 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 21:27:14.785929  355334 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 21:27:14.785935  355334 command_runner.go:130] > #   state.
	I0108 21:27:14.785944  355334 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 21:27:14.785955  355334 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 21:27:14.785966  355334 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 21:27:14.785978  355334 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 21:27:14.785988  355334 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 21:27:14.785999  355334 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 21:27:14.786007  355334 command_runner.go:130] > #   The currently recognized values are:
	I0108 21:27:14.786020  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 21:27:14.786034  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 21:27:14.786095  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 21:27:14.786104  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 21:27:14.786114  355334 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 21:27:14.786124  355334 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 21:27:14.786137  355334 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 21:27:14.786148  355334 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 21:27:14.786159  355334 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 21:27:14.786165  355334 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 21:27:14.786174  355334 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 21:27:14.786179  355334 command_runner.go:130] > runtime_type = "oci"
	I0108 21:27:14.786187  355334 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 21:27:14.786197  355334 command_runner.go:130] > runtime_config_path = ""
	I0108 21:27:14.786203  355334 command_runner.go:130] > monitor_path = ""
	I0108 21:27:14.786212  355334 command_runner.go:130] > monitor_cgroup = ""
	I0108 21:27:14.786218  355334 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 21:27:14.786229  355334 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 21:27:14.786238  355334 command_runner.go:130] > # running containers
	I0108 21:27:14.786246  355334 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 21:27:14.786261  355334 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 21:27:14.786300  355334 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 21:27:14.786313  355334 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 21:27:14.786322  355334 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 21:27:14.786332  355334 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 21:27:14.786340  355334 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 21:27:14.786350  355334 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 21:27:14.786359  355334 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 21:27:14.786370  355334 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 21:27:14.786381  355334 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 21:27:14.786392  355334 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 21:27:14.786404  355334 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 21:27:14.786426  355334 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 21:27:14.786441  355334 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 21:27:14.786453  355334 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 21:27:14.786468  355334 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 21:27:14.786482  355334 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 21:27:14.786494  355334 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 21:27:14.786508  355334 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 21:27:14.786517  355334 command_runner.go:130] > # Example:
	I0108 21:27:14.786526  355334 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 21:27:14.786537  355334 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 21:27:14.786548  355334 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 21:27:14.786560  355334 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 21:27:14.786569  355334 command_runner.go:130] > # cpuset = 0
	I0108 21:27:14.786577  355334 command_runner.go:130] > # cpushares = "0-1"
	I0108 21:27:14.786584  355334 command_runner.go:130] > # Where:
	I0108 21:27:14.786591  355334 command_runner.go:130] > # The workload name is workload-type.
	I0108 21:27:14.786605  355334 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 21:27:14.786616  355334 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 21:27:14.786625  355334 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 21:27:14.786641  355334 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 21:27:14.786653  355334 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 21:27:14.786662  355334 command_runner.go:130] > # 
	I0108 21:27:14.786685  355334 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 21:27:14.786694  355334 command_runner.go:130] > #
	I0108 21:27:14.786704  355334 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 21:27:14.786716  355334 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 21:27:14.786729  355334 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 21:27:14.786742  355334 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 21:27:14.786756  355334 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 21:27:14.786765  355334 command_runner.go:130] > [crio.image]
	I0108 21:27:14.786778  355334 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 21:27:14.786788  355334 command_runner.go:130] > # default_transport = "docker://"
	I0108 21:27:14.786802  355334 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 21:27:14.786815  355334 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:27:14.786824  355334 command_runner.go:130] > # global_auth_file = ""
	I0108 21:27:14.786835  355334 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 21:27:14.786846  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:14.786857  355334 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 21:27:14.786868  355334 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 21:27:14.786877  355334 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:27:14.786884  355334 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:27:14.786889  355334 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 21:27:14.786898  355334 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 21:27:14.786907  355334 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 21:27:14.786917  355334 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 21:27:14.786926  355334 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 21:27:14.786932  355334 command_runner.go:130] > # pause_command = "/pause"
	I0108 21:27:14.786939  355334 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 21:27:14.786948  355334 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 21:27:14.786956  355334 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 21:27:14.786965  355334 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 21:27:14.786971  355334 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 21:27:14.786977  355334 command_runner.go:130] > # signature_policy = ""
	I0108 21:27:14.786983  355334 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 21:27:14.786992  355334 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 21:27:14.786998  355334 command_runner.go:130] > # changing them here.
	I0108 21:27:14.787003  355334 command_runner.go:130] > # insecure_registries = [
	I0108 21:27:14.787008  355334 command_runner.go:130] > # ]
	I0108 21:27:14.787018  355334 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 21:27:14.787026  355334 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 21:27:14.787031  355334 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 21:27:14.787046  355334 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 21:27:14.787051  355334 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 21:27:14.787058  355334 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 21:27:14.787064  355334 command_runner.go:130] > # CNI plugins.
	I0108 21:27:14.787069  355334 command_runner.go:130] > [crio.network]
	I0108 21:27:14.787077  355334 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 21:27:14.787084  355334 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 21:27:14.787089  355334 command_runner.go:130] > # cni_default_network = ""
	I0108 21:27:14.787098  355334 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 21:27:14.787105  355334 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 21:27:14.787114  355334 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 21:27:14.787120  355334 command_runner.go:130] > # plugin_dirs = [
	I0108 21:27:14.787125  355334 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 21:27:14.787130  355334 command_runner.go:130] > # ]
	I0108 21:27:14.787136  355334 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 21:27:14.787142  355334 command_runner.go:130] > [crio.metrics]
	I0108 21:27:14.787147  355334 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 21:27:14.787154  355334 command_runner.go:130] > enable_metrics = true
	I0108 21:27:14.787166  355334 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 21:27:14.787172  355334 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 21:27:14.787181  355334 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 21:27:14.787213  355334 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 21:27:14.787222  355334 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 21:27:14.787227  355334 command_runner.go:130] > # metrics_collectors = [
	I0108 21:27:14.787233  355334 command_runner.go:130] > # 	"operations",
	I0108 21:27:14.787238  355334 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 21:27:14.787245  355334 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 21:27:14.787249  355334 command_runner.go:130] > # 	"operations_errors",
	I0108 21:27:14.787255  355334 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 21:27:14.787260  355334 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 21:27:14.787267  355334 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 21:27:14.787271  355334 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 21:27:14.787278  355334 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 21:27:14.787282  355334 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 21:27:14.787289  355334 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 21:27:14.787294  355334 command_runner.go:130] > # 	"containers_oom_total",
	I0108 21:27:14.787299  355334 command_runner.go:130] > # 	"containers_oom",
	I0108 21:27:14.787304  355334 command_runner.go:130] > # 	"processes_defunct",
	I0108 21:27:14.787310  355334 command_runner.go:130] > # 	"operations_total",
	I0108 21:27:14.787315  355334 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 21:27:14.787322  355334 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 21:27:14.787326  355334 command_runner.go:130] > # 	"operations_errors_total",
	I0108 21:27:14.787335  355334 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 21:27:14.787341  355334 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 21:27:14.787346  355334 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 21:27:14.787352  355334 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 21:27:14.787375  355334 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 21:27:14.787387  355334 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 21:27:14.787394  355334 command_runner.go:130] > # ]
	I0108 21:27:14.787399  355334 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 21:27:14.787406  355334 command_runner.go:130] > # metrics_port = 9090
	I0108 21:27:14.787429  355334 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 21:27:14.787436  355334 command_runner.go:130] > # metrics_socket = ""
	I0108 21:27:14.787443  355334 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 21:27:14.787451  355334 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 21:27:14.787458  355334 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 21:27:14.787465  355334 command_runner.go:130] > # certificate on any modification event.
	I0108 21:27:14.787469  355334 command_runner.go:130] > # metrics_cert = ""
	I0108 21:27:14.787477  355334 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 21:27:14.787484  355334 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 21:27:14.787489  355334 command_runner.go:130] > # metrics_key = ""
	I0108 21:27:14.787497  355334 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 21:27:14.787503  355334 command_runner.go:130] > [crio.tracing]
	I0108 21:27:14.787509  355334 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 21:27:14.787515  355334 command_runner.go:130] > # enable_tracing = false
	I0108 21:27:14.787526  355334 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 21:27:14.787533  355334 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 21:27:14.787540  355334 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 21:27:14.787547  355334 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 21:27:14.787554  355334 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 21:27:14.787560  355334 command_runner.go:130] > [crio.stats]
	I0108 21:27:14.787566  355334 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 21:27:14.787574  355334 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 21:27:14.787580  355334 command_runner.go:130] > # stats_collection_period = 0
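	The dump above is the effective CRI-O configuration that minikube reads back with "crio config"; the values this run actually depends on (storage_driver = "overlay", cgroup_manager = "cgroupfs", conmon, pause_image, pinns_path) are the uncommented lines. A quick way to re-check the same values by hand on the node, shown only as a sketch; the drop-in path in the comment is a common convention and an assumption, not something this log shows:

	    # print the effective config and pull out the settings minikube cares about
	    sudo crio config 2>/dev/null | grep -E '^(storage_driver|cgroup_manager|pause_image|pinns_path)'
	    # site overrides normally live in a drop-in directory (assumed path):
	    #   /etc/crio/crio.conf.d/*.conf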
	I0108 21:27:14.787655  355334 cni.go:84] Creating CNI manager for ""
	I0108 21:27:14.787665  355334 cni.go:136] 2 nodes found, recommending kindnet
	I0108 21:27:14.787675  355334 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:27:14.787695  355334 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.111 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-962345 NodeName:multinode-962345-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:27:14.787804  355334 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-962345-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
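	The block above is the kubeadm configuration minikube renders for the joining node: an InitConfiguration carrying the node's own IP and CRI socket, the shared ClusterConfiguration pointing at control-plane.minikube.internal:8443, and the KubeletConfiguration/KubeProxyConfiguration component configs. A hedged sketch of how such a file could be sanity-checked against the pinned kubeadm release before use; the on-disk config path in the comment is an assumption, not taken from this log:

	    # compare against kubeadm's shipped defaults for this release
	    /var/lib/minikube/binaries/v1.28.4/kubeadm config print init-defaults --component-configs KubeletConfiguration
	    # recent kubeadm releases can also lint a config file directly (assumed path):
	    # /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml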
	
	I0108 21:27:14.787864  355334 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-962345-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
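	The [Unit]/[Service] fragment above becomes the systemd drop-in for kubelet (the 380-byte 10-kubeadm.conf transferred a few lines below), overriding ExecStart so kubelet runs against the CRI-O socket with the per-node hostname override and node IP. After such a drop-in is written, the usual follow-up is to reload systemd and (re)start the service; a sketch:

	    sudo systemctl daemon-reload
	    sudo systemctl enable --now kubelet     # or restart, if kubelet is already running
	    systemctl cat kubelet                   # confirm the drop-in's ExecStart is in effect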
	I0108 21:27:14.787917  355334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:27:14.798371  355334 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0108 21:27:14.798432  355334 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0108 21:27:14.798499  355334 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0108 21:27:14.809143  355334 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0108 21:27:14.809167  355334 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0108 21:27:14.809151  355334 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0108 21:27:14.809351  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 21:27:14.809436  355334 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 21:27:14.813868  355334 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 21:27:14.813996  355334 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 21:27:14.814031  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0108 21:27:15.513380  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 21:27:15.513466  355334 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 21:27:15.518902  355334 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 21:27:15.519270  355334 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 21:27:15.519303  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0108 21:27:15.892400  355334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:27:15.906657  355334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 21:27:15.906750  355334 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 21:27:15.910991  355334 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 21:27:15.911028  355334 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 21:27:15.911050  355334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
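	The three Kubernetes binaries are fetched from dl.k8s.io with a checksum= fragment, i.e. each download is verified against the published .sha256 file before being copied into /var/lib/minikube/binaries/v1.28.4 on the node. The manual equivalent, shown as a sketch for kubelet only:

	    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet
	    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # prints "kubelet: OK" on a good download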
	I0108 21:27:16.410944  355334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 21:27:16.419903  355334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0108 21:27:16.436064  355334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:27:16.451273  355334 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0108 21:27:16.455309  355334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
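The one-line /etc/hosts edit above is dense; expanded for readability, it drops any stale control-plane.minikube.internal entry and appends the current control-plane IP via a temp file. This is the same operation as the logged command (printf with \t stands in for the literal tab in the original echo):

    # Rewrite /etc/hosts so control-plane.minikube.internal resolves to the control-plane IP.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.39.239\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts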
	I0108 21:27:16.468253  355334 host.go:66] Checking if "multinode-962345" exists ...
	I0108 21:27:16.468568  355334 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:27:16.468770  355334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:27:16.468806  355334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:27:16.483690  355334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0108 21:27:16.484104  355334 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:27:16.484516  355334 main.go:141] libmachine: Using API Version  1
	I0108 21:27:16.484539  355334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:27:16.484878  355334 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:27:16.485080  355334 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:27:16.485241  355334 start.go:304] JoinCluster: &{Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:27:16.485329  355334 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 21:27:16.485344  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:27:16.488359  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:27:16.488786  355334 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:27:16.488827  355334 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:27:16.488940  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:27:16.489127  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:27:16.489271  355334 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:27:16.489430  355334 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:27:16.660616  355334 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2mrers.e2amq7w848835apj --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
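The join command printed above is generated on the control plane with a non-expiring token, so the printed line stays valid for the worker to use. The equivalent step, using the same PATH prefix as the logged command:

    # On the control-plane node: print a join command backed by a token with no TTL.
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm token create --print-join-command --ttl=0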
	I0108 21:27:16.660695  355334 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 21:27:16.660733  355334 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2mrers.e2amq7w848835apj --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-962345-m02"
	I0108 21:27:16.705780  355334 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:27:16.850589  355334 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 21:27:16.850621  355334 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 21:27:16.894990  355334 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:27:16.895022  355334 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:27:16.895027  355334 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:27:17.022209  355334 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 21:27:19.536354  355334 command_runner.go:130] > This node has joined the cluster:
	I0108 21:27:19.536390  355334 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 21:27:19.536402  355334 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 21:27:19.536412  355334 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 21:27:19.538057  355334 command_runner.go:130] ! W0108 21:27:16.680442     824 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 21:27:19.538089  355334 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:27:19.538132  355334 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2mrers.e2amq7w848835apj --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-962345-m02": (2.877382712s)
	I0108 21:27:19.538161  355334 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 21:27:19.836260  355334 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0108 21:27:19.836377  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-962345 minikube.k8s.io/updated_at=2024_01_08T21_27_19_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:27:19.945839  355334 command_runner.go:130] > node/multinode-962345-m02 labeled
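Two follow-up steps appear once the join succeeds: the kubelet service is enabled and started (the preflight warning above notes it was not enabled), and the new node is labeled with minikube metadata using a selector that excludes the primary node. A condensed sketch of those two commands; the full label set (version, commit, updated_at) appears in the logged command above.

    # On the worker: make kubelet survive reboots and start it now.
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet

    # From the control plane: tag the freshly joined node (primary nodes are excluded by the selector).
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes \
      minikube.k8s.io/name=multinode-962345 \
      minikube.k8s.io/primary=false \
      -l 'minikube.k8s.io/primary!=true' --overwrite \
      --kubeconfig=/var/lib/minikube/kubeconfig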
	I0108 21:27:19.947705  355334 start.go:306] JoinCluster complete in 3.462455921s
	I0108 21:27:19.947735  355334 cni.go:84] Creating CNI manager for ""
	I0108 21:27:19.947743  355334 cni.go:136] 2 nodes found, recommending kindnet
	I0108 21:27:19.947793  355334 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:27:19.953553  355334 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:27:19.953582  355334 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:27:19.953592  355334 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:27:19.953603  355334 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:27:19.953613  355334 command_runner.go:130] > Access: 2024-01-08 21:25:56.566172033 +0000
	I0108 21:27:19.953622  355334 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0108 21:27:19.953634  355334 command_runner.go:130] > Change: 2024-01-08 21:25:54.724172033 +0000
	I0108 21:27:19.953640  355334 command_runner.go:130] >  Birth: -
	I0108 21:27:19.953830  355334 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:27:19.953852  355334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:27:19.974201  355334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:27:20.286466  355334 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:27:20.292427  355334 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:27:20.297039  355334 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:27:20.313463  355334 command_runner.go:130] > daemonset.apps/kindnet configured
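With two nodes in the cluster, minikube recommends kindnet and applies its manifest through the cached kubectl; the "unchanged"/"configured" lines above show the objects already existed from the first node. The equivalent apply, assuming the manifest has already been written to /var/tmp/minikube/cni.yaml as in the log:

    # Apply the kindnet CNI manifest against the cluster's kubeconfig.
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -f /var/tmp/minikube/cni.yaml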
	I0108 21:27:20.318696  355334 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:27:20.318965  355334 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:27:20.319399  355334 round_trippers.go:463] GET https://192.168.39.239:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:27:20.319415  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:20.319423  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:20.319429  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:20.322271  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:20.322290  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:20.322297  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:20.322302  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:20.322307  355334 round_trippers.go:580]     Content-Length: 291
	I0108 21:27:20.322314  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:20 GMT
	I0108 21:27:20.322323  355334 round_trippers.go:580]     Audit-Id: 19584587-fdbf-4358-95d2-dc24ce92fdbf
	I0108 21:27:20.322331  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:20.322340  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:20.322364  355334 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9a0db73a-68c0-469b-b860-0baad5e41646","resourceVersion":"443","creationTimestamp":"2024-01-08T21:26:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:27:20.322466  355334 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-962345" context rescaled to 1 replicas
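The GET on the coredns scale subresource above shows spec.replicas already at 1, so the "rescaled to 1 replicas" step changes nothing here. If a change were needed, the same result can be reached with kubectl's scale command; this is an equivalent, not the exact scale-subresource request minikube issues, and it assumes the profile's kubeconfig context is named after the profile:

    # Keep a single CoreDNS replica in the multi-node cluster.
    kubectl --context multinode-962345 -n kube-system scale deployment coredns --replicas=1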
	I0108 21:27:20.322497  355334 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 21:27:20.324344  355334 out.go:177] * Verifying Kubernetes components...
	I0108 21:27:20.325840  355334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:27:20.350938  355334 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:27:20.351263  355334 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:27:20.351594  355334 node_ready.go:35] waiting up to 6m0s for node "multinode-962345-m02" to be "Ready" ...
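The long run of GET requests that follows is the readiness poll: roughly every 500 ms the node object is fetched and its Ready condition inspected until it reports True or the 6-minute budget expires. The test talks to the API server directly through client-go (hence the round_trippers lines); a hedged approximation of the same check with plain kubectl, assuming a kubeconfig pointing at this cluster:

    # Poll the worker node's Ready condition, mirroring the wait loop in the log.
    NODE=multinode-962345-m02
    until [ "$(kubectl get node "$NODE" \
          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
      sleep 0.5
    done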
	I0108 21:27:20.351700  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:20.351708  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:20.351720  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:20.351731  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:20.355000  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:20.355027  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:20.355035  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:20.355043  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:20.355051  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:20.355062  355334 round_trippers.go:580]     Content-Length: 4083
	I0108 21:27:20.355075  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:20 GMT
	I0108 21:27:20.355088  355334 round_trippers.go:580]     Audit-Id: b7eb1b20-04f7-4c8a-b689-09424d060008
	I0108 21:27:20.355102  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:20.355192  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"495","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I0108 21:27:20.852596  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:20.852623  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:20.852631  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:20.852638  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:20.856489  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:20.856524  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:20.856533  355334 round_trippers.go:580]     Content-Length: 4083
	I0108 21:27:20.856538  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:20 GMT
	I0108 21:27:20.856546  355334 round_trippers.go:580]     Audit-Id: 20cc58a2-cbcf-4222-8384-4551f49c807b
	I0108 21:27:20.856554  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:20.856563  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:20.856579  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:20.856600  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:20.856722  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"495","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I0108 21:27:21.351909  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:21.351935  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:21.351946  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:21.351954  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:21.354884  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:21.354904  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:21.354911  355334 round_trippers.go:580]     Audit-Id: 4cfbe880-da0a-4b08-b94f-a80deea34890
	I0108 21:27:21.354917  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:21.354924  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:21.354931  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:21.354949  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:21.354956  355334 round_trippers.go:580]     Content-Length: 4083
	I0108 21:27:21.354964  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:21 GMT
	I0108 21:27:21.355047  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"495","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I0108 21:27:21.852139  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:21.852169  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:21.852178  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:21.852184  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:21.855700  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:21.855738  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:21.855751  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:21.855761  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:21.855769  355334 round_trippers.go:580]     Content-Length: 4083
	I0108 21:27:21.855777  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:21 GMT
	I0108 21:27:21.855789  355334 round_trippers.go:580]     Audit-Id: f317abc4-ca6c-483a-b12d-b8cb0213a384
	I0108 21:27:21.855798  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:21.855807  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:21.855929  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"495","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I0108 21:27:22.352475  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:22.352504  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:22.352513  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:22.352519  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:22.355488  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:22.355519  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:22.355528  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:22 GMT
	I0108 21:27:22.355534  355334 round_trippers.go:580]     Audit-Id: bf4f4671-cf96-458c-bf03-a0008a088322
	I0108 21:27:22.355539  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:22.355544  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:22.355549  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:22.355555  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:22.355560  355334 round_trippers.go:580]     Content-Length: 4083
	I0108 21:27:22.355606  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"495","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I0108 21:27:22.355852  355334 node_ready.go:58] node "multinode-962345-m02" has status "Ready":"False"
	I0108 21:27:22.852140  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:22.852164  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:22.852178  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:22.852185  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:22.854784  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:22.854807  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:22.854817  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:22 GMT
	I0108 21:27:22.854825  355334 round_trippers.go:580]     Audit-Id: 56c59400-f00f-46f9-afe1-7f3f1f02fb45
	I0108 21:27:22.854833  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:22.854842  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:22.854853  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:22.854866  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:22.854876  355334 round_trippers.go:580]     Content-Length: 4083
	I0108 21:27:22.854963  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"495","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I0108 21:27:23.352377  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:23.352406  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:23.352417  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:23.352432  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:23.354962  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:23.354992  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:23.354999  355334 round_trippers.go:580]     Audit-Id: e231bd6c-d4f3-45ed-8f7c-f326187db6c5
	I0108 21:27:23.355005  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:23.355010  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:23.355015  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:23.355020  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:23.355025  355334 round_trippers.go:580]     Content-Length: 4083
	I0108 21:27:23.355033  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:23 GMT
	I0108 21:27:23.355472  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"495","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I0108 21:27:23.852624  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:23.852656  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:23.852666  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:23.852674  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:23.856116  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:23.856144  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:23.856155  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:23.856165  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:23 GMT
	I0108 21:27:23.856173  355334 round_trippers.go:580]     Audit-Id: 5e8d436a-6991-4451-a907-fcbc5ba567fa
	I0108 21:27:23.856178  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:23.856183  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:23.856188  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:23.856297  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"505","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I0108 21:27:24.351840  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:24.351871  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:24.351882  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:24.351890  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:24.354572  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:24.354594  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:24.354604  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:24.354613  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:24.354622  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:24.354631  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:24 GMT
	I0108 21:27:24.354641  355334 round_trippers.go:580]     Audit-Id: 79da39d8-2625-4ff1-8b20-e1dad22a9d00
	I0108 21:27:24.354647  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:24.354796  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"505","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I0108 21:27:24.852517  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:24.852551  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:24.852559  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:24.852565  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:24.855429  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:24.855453  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:24.855460  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:24 GMT
	I0108 21:27:24.855465  355334 round_trippers.go:580]     Audit-Id: 6cd76038-0d9e-41a6-89e1-d5e0e3f2e18d
	I0108 21:27:24.855471  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:24.855475  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:24.855481  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:24.855486  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:24.855818  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"505","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I0108 21:27:24.856082  355334 node_ready.go:58] node "multinode-962345-m02" has status "Ready":"False"
	I0108 21:27:25.352308  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:25.352333  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:25.352342  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:25.352348  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:25.354833  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:25.354860  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:25.354871  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:25.354882  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:25.354891  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:25 GMT
	I0108 21:27:25.354900  355334 round_trippers.go:580]     Audit-Id: 3cf202ec-cc5b-43e5-8171-ceaf9fc04aeb
	I0108 21:27:25.354908  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:25.354916  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:25.355060  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"505","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I0108 21:27:25.852719  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:25.852751  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:25.852760  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:25.852766  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:25.856131  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:25.856158  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:25.856166  355334 round_trippers.go:580]     Audit-Id: 38f860a5-a1fc-4710-a900-d8158179c0db
	I0108 21:27:25.856171  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:25.856177  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:25.856182  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:25.856187  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:25.856193  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:25 GMT
	I0108 21:27:25.856986  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"505","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I0108 21:27:26.352577  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:26.352602  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:26.352610  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:26.352617  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:26.355449  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:26.355473  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:26.355483  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:26.355490  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:26.355505  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:26.355514  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:26 GMT
	I0108 21:27:26.355523  355334 round_trippers.go:580]     Audit-Id: 59c88b5a-21e7-4426-a861-8f8f90c3b594
	I0108 21:27:26.355537  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:26.355995  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"505","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I0108 21:27:26.852467  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:26.852502  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:26.852511  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:26.852518  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:26.855747  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:26.855771  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:26.855777  355334 round_trippers.go:580]     Audit-Id: 738b4b63-d75f-4880-8230-493ead4fc849
	I0108 21:27:26.855783  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:26.855788  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:26.855797  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:26.855805  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:26.855813  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:26 GMT
	I0108 21:27:26.856303  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"505","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I0108 21:27:26.856575  355334 node_ready.go:58] node "multinode-962345-m02" has status "Ready":"False"
	I0108 21:27:27.352645  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:27.352668  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.352677  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.352683  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.355308  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:27.355329  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.355336  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.355341  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.355351  355334 round_trippers.go:580]     Audit-Id: 528bcc00-1fa8-4105-bcb4-e2af9b586286
	I0108 21:27:27.355375  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.355384  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.355396  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.355645  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"505","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3168 chars]
	I0108 21:27:27.852297  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:27.852322  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.852330  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.852336  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.854588  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:27.854614  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.854625  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.854633  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.854647  355334 round_trippers.go:580]     Audit-Id: b505e51b-a2b3-4eb5-bbdc-c340fdc1d5f0
	I0108 21:27:27.854655  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.854662  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.854674  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.855179  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"517","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3254 chars]
	I0108 21:27:27.855549  355334 node_ready.go:49] node "multinode-962345-m02" has status "Ready":"True"
	I0108 21:27:27.855581  355334 node_ready.go:38] duration metric: took 7.503962183s waiting for node "multinode-962345-m02" to be "Ready" ...
	I0108 21:27:27.855594  355334 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
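Once the node is Ready, the test moves on to the system-critical pods enumerated above (CoreDNS, etcd, the control-plane components, and kube-proxy), again polling each until its Ready condition holds. A compact equivalent using kubectl wait with the same label selectors, assuming a kubeconfig pointing at this cluster:

    # Wait for the system-critical components the log enumerates.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done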
	I0108 21:27:27.855677  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:27:27.855689  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.855700  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.855712  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.859944  355334 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:27:27.859984  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.859994  355334 round_trippers.go:580]     Audit-Id: bb801ec7-5592-45fe-beee-147b25084612
	I0108 21:27:27.860003  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.860011  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.860019  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.860027  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.860035  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.861459  355334 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"517"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"439","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67404 chars]
	I0108 21:27:27.864311  355334 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:27.864400  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:27:27.864412  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.864423  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.864436  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.866453  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:27:27.866474  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.866484  355334 round_trippers.go:580]     Audit-Id: ffaaec5d-c7f0-4c9c-9bff-503d7c0d0b6d
	I0108 21:27:27.866493  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.866501  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.866508  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.866519  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.866527  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.866863  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"439","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 21:27:27.867376  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:27:27.867391  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.867402  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.867411  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.869115  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:27:27.869134  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.869144  355334 round_trippers.go:580]     Audit-Id: 79d64eec-ab97-484f-af4a-bea6d5503d4b
	I0108 21:27:27.869153  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.869159  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.869164  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.869173  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.869178  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.869365  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:27:27.869724  355334 pod_ready.go:92] pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:27.869742  355334 pod_ready.go:81] duration metric: took 5.406344ms waiting for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:27.869750  355334 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:27.869792  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-962345
	I0108 21:27:27.869800  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.869806  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.869812  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.871642  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:27:27.871661  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.871669  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.871685  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.871694  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.871701  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.871709  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.871717  355334 round_trippers.go:580]     Audit-Id: 1381ac08-ae59-4d67-970f-4d1e160c03af
	I0108 21:27:27.871990  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-962345","namespace":"kube-system","uid":"44773ce7-5393-4178-a985-d8bf216f88f1","resourceVersion":"325","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.239:2379","kubernetes.io/config.hash":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.mirror":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.seen":"2024-01-08T21:26:26.755438257Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 21:27:27.872382  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:27:27.872396  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.872403  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.872409  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.874011  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:27:27.874032  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.874041  355334 round_trippers.go:580]     Audit-Id: a4169ced-8e65-4dee-a5eb-b139bc76eba8
	I0108 21:27:27.874049  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.874056  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.874064  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.874073  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.874083  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.874217  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:27:27.874550  355334 pod_ready.go:92] pod "etcd-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:27.874561  355334 pod_ready.go:81] duration metric: took 4.806583ms waiting for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:27.874573  355334 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:27.874616  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-962345
	I0108 21:27:27.874620  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.874627  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.874633  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.876338  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:27:27.876359  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.876367  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.876376  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.876384  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.876392  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.876401  355334 round_trippers.go:580]     Audit-Id: 5ce41190-066f-432f-9349-816cf082d167
	I0108 21:27:27.876409  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.876602  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-962345","namespace":"kube-system","uid":"bea03251-08df-4434-bc4a-36ef454e151e","resourceVersion":"331","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.239:8443","kubernetes.io/config.hash":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.mirror":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.seen":"2024-01-08T21:26:26.755439577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 21:27:27.877079  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:27:27.877096  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.877107  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.877117  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.878854  355334 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:27:27.878875  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.878886  355334 round_trippers.go:580]     Audit-Id: 0cb73ad4-b419-43e9-bbe9-d6cb8418458f
	I0108 21:27:27.878897  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.878909  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.878921  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.878931  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.878943  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.879086  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:27:27.879472  355334 pod_ready.go:92] pod "kube-apiserver-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:27.879495  355334 pod_ready.go:81] duration metric: took 4.915209ms waiting for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:27.879507  355334 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:27.879568  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-962345
	I0108 21:27:27.879578  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.879588  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.879600  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.881623  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:27.881645  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.881656  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.881665  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.881671  355334 round_trippers.go:580]     Audit-Id: 5e5be780-52cb-4ea7-b1ff-53a86050e3a4
	I0108 21:27:27.881676  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.881681  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.881686  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.881806  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-962345","namespace":"kube-system","uid":"80b86d62-83f0-4550-988f-6255409d39da","resourceVersion":"308","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.mirror":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.seen":"2024-01-08T21:26:26.755427365Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 21:27:27.882236  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:27:27.882253  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:27.882264  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:27.882273  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:27.884794  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:27.884812  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:27.884822  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:27.884832  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:27.884839  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:27.884844  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:27 GMT
	I0108 21:27:27.884849  355334 round_trippers.go:580]     Audit-Id: 45b9a5b8-1642-48c1-8158-5411be5c2500
	I0108 21:27:27.884857  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:27.884961  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:27:27.885307  355334 pod_ready.go:92] pod "kube-controller-manager-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:27.885324  355334 pod_ready.go:81] duration metric: took 5.808085ms waiting for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:27.885337  355334 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:28.052786  355334 request.go:629] Waited for 167.365165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:27:28.052850  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:27:28.052855  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:28.052863  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:28.052869  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:28.055850  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:28.055874  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:28.055884  355334 round_trippers.go:580]     Audit-Id: 0a0b2032-0e60-419c-bca9-a525bfcc1518
	I0108 21:27:28.055894  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:28.055903  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:28.055912  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:28.055922  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:28.055930  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:28 GMT
	I0108 21:27:28.056049  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2c2t6","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e","resourceVersion":"506","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 21:27:28.252558  355334 request.go:629] Waited for 195.98769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:28.252646  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:27:28.252654  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:28.252669  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:28.252683  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:28.254988  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:28.255016  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:28.255027  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:28.255034  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:28.255043  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:28 GMT
	I0108 21:27:28.255051  355334 round_trippers.go:580]     Audit-Id: 5f8e4485-1c29-49a3-9c59-2a0f8f70dbe9
	I0108 21:27:28.255059  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:28.255068  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:28.255163  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"517","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_27_19_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3254 chars]
	I0108 21:27:28.255547  355334 pod_ready.go:92] pod "kube-proxy-2c2t6" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:28.255577  355334 pod_ready.go:81] duration metric: took 370.231043ms waiting for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:28.255592  355334 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:28.452903  355334 request.go:629] Waited for 197.206687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:27:28.452979  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:27:28.452984  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:28.452991  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:28.452997  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:28.455653  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:28.455685  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:28.455696  355334 round_trippers.go:580]     Audit-Id: 18aea2a3-3367-48d4-9451-14eba0f9ad9e
	I0108 21:27:28.455705  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:28.455713  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:28.455722  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:28.455736  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:28.455743  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:28 GMT
	I0108 21:27:28.455943  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmjzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"fbfa39a4-ba62-4e31-8126-9a320311e846","resourceVersion":"409","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 21:27:28.652928  355334 request.go:629] Waited for 196.395573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:27:28.653020  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:27:28.653029  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:28.653042  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:28.653057  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:28.656260  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:28.656291  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:28.656307  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:28.656319  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:28.656328  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:28.656337  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:28 GMT
	I0108 21:27:28.656349  355334 round_trippers.go:580]     Audit-Id: 1630494b-c178-414d-a321-b04409cf2f59
	I0108 21:27:28.656356  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:28.656641  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:27:28.656978  355334 pod_ready.go:92] pod "kube-proxy-bmjzs" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:28.656996  355334 pod_ready.go:81] duration metric: took 401.388712ms waiting for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:28.657006  355334 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:28.852717  355334 request.go:629] Waited for 195.62171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:27:28.852788  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:27:28.852793  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:28.852801  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:28.852807  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:28.855806  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:28.855839  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:28.855850  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:28.855859  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:28 GMT
	I0108 21:27:28.855866  355334 round_trippers.go:580]     Audit-Id: 8014bc27-9d2d-465b-bedd-f4b0c8d1e005
	I0108 21:27:28.855875  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:28.855884  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:28.855891  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:28.856395  355334 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-962345","namespace":"kube-system","uid":"3778c0a4-1528-4336-9f02-b77a2a6a1c34","resourceVersion":"306","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.mirror":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.seen":"2024-01-08T21:26:26.755431609Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 21:27:29.053235  355334 request.go:629] Waited for 196.367252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:27:29.053312  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:27:29.053317  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:29.053325  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:29.053332  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:29.056207  355334 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:27:29.056234  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:29.056241  355334 round_trippers.go:580]     Audit-Id: be6da148-8888-47ee-8422-739936820a71
	I0108 21:27:29.056246  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:29.056252  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:29.056259  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:29.056268  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:29.056283  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:29 GMT
	I0108 21:27:29.056516  355334 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 21:27:29.056882  355334 pod_ready.go:92] pod "kube-scheduler-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:27:29.056903  355334 pod_ready.go:81] duration metric: took 399.890813ms waiting for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:27:29.056913  355334 pod_ready.go:38] duration metric: took 1.201307604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:27:29.056931  355334 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:27:29.056977  355334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:27:29.071231  355334 system_svc.go:56] duration metric: took 14.286766ms WaitForService to wait for kubelet.
	I0108 21:27:29.071263  355334 kubeadm.go:581] duration metric: took 8.748735391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:27:29.071292  355334 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:27:29.252744  355334 request.go:629] Waited for 181.362444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes
	I0108 21:27:29.252858  355334 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes
	I0108 21:27:29.252865  355334 round_trippers.go:469] Request Headers:
	I0108 21:27:29.252875  355334 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:27:29.252888  355334 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:27:29.256007  355334 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:27:29.256031  355334 round_trippers.go:577] Response Headers:
	I0108 21:27:29.256038  355334 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:27:29 GMT
	I0108 21:27:29.256044  355334 round_trippers.go:580]     Audit-Id: 458a48ef-38fa-459f-beeb-0c46dcc32f65
	I0108 21:27:29.256049  355334 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:27:29.256054  355334 round_trippers.go:580]     Content-Type: application/json
	I0108 21:27:29.256059  355334 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:27:29.256064  355334 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:27:29.256220  355334 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"419","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10078 chars]
	I0108 21:27:29.256708  355334 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:27:29.256731  355334 node_conditions.go:123] node cpu capacity is 2
	I0108 21:27:29.256741  355334 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:27:29.256745  355334 node_conditions.go:123] node cpu capacity is 2
	I0108 21:27:29.256749  355334 node_conditions.go:105] duration metric: took 185.45175ms to run NodePressure ...
	I0108 21:27:29.256760  355334 start.go:228] waiting for startup goroutines ...
	I0108 21:27:29.256785  355334 start.go:242] writing updated cluster config ...
	I0108 21:27:29.257081  355334 ssh_runner.go:195] Run: rm -f paused
	I0108 21:27:29.305171  355334 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:27:29.307349  355334 out.go:177] * Done! kubectl is now configured to use "multinode-962345" cluster and "default" namespace by default
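	
	For context only: the pod_ready entries above follow the standard client-go pattern of repeatedly fetching a pod and checking its Ready condition until a deadline, and the "client-side throttling" messages come from client-go's request rate limiter. The snippet below is a minimal, illustrative sketch of that readiness-polling pattern, not minikube's own pod_ready implementation; kubeconfigPath is a placeholder, and the namespace/pod name are simply taken from the log above.
	
	// Illustrative sketch: poll a pod's Ready condition with client-go until a timeout.
	// Not minikube's code; kubeconfigPath, namespace and podName are placeholders.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		kubeconfigPath := "/path/to/kubeconfig" // placeholder
		namespace, podName := "kube-system", "coredns-5dd5756b68-v6dmd"
	
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Poll every 500ms for up to 6 minutes, matching the 6m0s timeout seen in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q in %q is Ready\n", podName, namespace)
	}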
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:25:55 UTC, ends at Mon 2024-01-08 21:27:35 UTC. --
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.586933760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704749255586920124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=554e32a7-f464-45fc-97ab-9d556dd72a20 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.587718811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d421be8b-0ee8-495a-a497-378811e18a99 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.587788719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d421be8b-0ee8-495a-a497-378811e18a99 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.587988995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66ab0843a6f6ff05bd61e4b46a548b46fcde1eb89b283c7d5259995893a5bcac,PodSandboxId:54fb1c94d0a799e1fc5f09e472922de2e8ae87a6f4e3f4994d752e465bcf69b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704749251780573973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-wmznk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84ab7957-5a65-40e2-a54b-138c6c0894f5,},Annotations:map[string]string{io.kubernetes.container.hash: 47861c95,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f514b9f7db46f03d93efb10aa6081beb6a735ff79847159366828357f08c254e,PodSandboxId:afc00b3166b887dc5b23b0d4acee8b0a394bf58124042b47cfe16413e18e663f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749205938088039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v6dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1edff2-3b29-4045-b7b9-935c47115d16,},Annotations:map[string]string{io.kubernetes.container.hash: badac4da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a1b26d2a314cd7c2e6ae259f5dfe696999f37bffd64a9f1df3e5ce66e6375b,PodSandboxId:8fdac79f6efe7c11d9126d9e30ff59ee788931a1ab5c705c9561cae23970c627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749205701603799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a10085550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93460e758f3e5d07e3a9026668488bc41a4dcdc502b820b45307734772700b1f,PodSandboxId:86cd121bda510b4f19198dd30acd7b89fe0d8aaa22524dd80ee9390fd248e114,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704749203002050508,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5w9nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b84fc0ee-c9b1-4e6c-b066-536f2fd56d52,},Annotations:map[string]string{io.kubernetes.container.hash: d5b65b0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86347109265ff5b6e0b7c7d88a5374fd88079ddfbe4e148069a3e76cf3707a2d,PodSandboxId:b7690830550aef5cebdd9e105409608817a3742d4cd8bf6e3abb9949bd21118b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749200972560227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbfa39a4-ba62-4e31-8126-9a3203
11e846,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1bef98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62c1c3fba9d76174d48d7fa8349a1c8db6c0610fe0fbcf6239e4ca36f5b3964,PodSandboxId:3f9706ee90e93bc1c1fae1d0a8dedf92eb3b678eb63424f5157793dd10364a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749179778473803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2489a6d3116ba4abcb5fd745efd3a4,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788985e5b730285122017e8c1ea922dacf536b244a0df55563d8abd8c82ea812,PodSandboxId:f9224348b6bbffb1638b3703a3a06ff589c58be23982afafc9c97cffeb817a9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749179533259435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6fd29cfc92d55a7ce4e2f96974ea73,},Annotations:map[string]string{io.kubernetes.container.h
ash: 22b676a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cb635e39312d50505cfff1be1008bece6fafb6e9925716e158af68a054ed36,PodSandboxId:5b958c4234c44dc8383ea3c3b511be009aed7e43360d30f8ae01aae1c9c9eb54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749179350084905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f90f3600544be0f17e2e088ab14d51,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72dfd688d628935b061a523a4c79256b249f57b06a8f0eb669951cf8fec000b,PodSandboxId:e3765aa9643509d099a1437260a4d03107c276e20d394ac7c3ca1c0443e6bf77,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749179181586753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dbed9a3f64fb2ec41dcc39fae30b654,},Annotations:map[string]string{io.kubernetes.
container.hash: 671fa91a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d421be8b-0ee8-495a-a497-378811e18a99 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.626736674Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dd291478-16a0-4674-b266-93494b76bc65 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.626809460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dd291478-16a0-4674-b266-93494b76bc65 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.628166569Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=777319a6-20d6-4f3a-9c02-49ca714560e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.628695224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704749255628682255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=777319a6-20d6-4f3a-9c02-49ca714560e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.629313969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1e496716-7f84-4570-983c-8ca96d3c3502 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.629386717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1e496716-7f84-4570-983c-8ca96d3c3502 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.629608491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66ab0843a6f6ff05bd61e4b46a548b46fcde1eb89b283c7d5259995893a5bcac,PodSandboxId:54fb1c94d0a799e1fc5f09e472922de2e8ae87a6f4e3f4994d752e465bcf69b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704749251780573973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-wmznk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84ab7957-5a65-40e2-a54b-138c6c0894f5,},Annotations:map[string]string{io.kubernetes.container.hash: 47861c95,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f514b9f7db46f03d93efb10aa6081beb6a735ff79847159366828357f08c254e,PodSandboxId:afc00b3166b887dc5b23b0d4acee8b0a394bf58124042b47cfe16413e18e663f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749205938088039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v6dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1edff2-3b29-4045-b7b9-935c47115d16,},Annotations:map[string]string{io.kubernetes.container.hash: badac4da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a1b26d2a314cd7c2e6ae259f5dfe696999f37bffd64a9f1df3e5ce66e6375b,PodSandboxId:8fdac79f6efe7c11d9126d9e30ff59ee788931a1ab5c705c9561cae23970c627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749205701603799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a10085550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93460e758f3e5d07e3a9026668488bc41a4dcdc502b820b45307734772700b1f,PodSandboxId:86cd121bda510b4f19198dd30acd7b89fe0d8aaa22524dd80ee9390fd248e114,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704749203002050508,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5w9nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b84fc0ee-c9b1-4e6c-b066-536f2fd56d52,},Annotations:map[string]string{io.kubernetes.container.hash: d5b65b0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86347109265ff5b6e0b7c7d88a5374fd88079ddfbe4e148069a3e76cf3707a2d,PodSandboxId:b7690830550aef5cebdd9e105409608817a3742d4cd8bf6e3abb9949bd21118b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749200972560227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbfa39a4-ba62-4e31-8126-9a3203
11e846,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1bef98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62c1c3fba9d76174d48d7fa8349a1c8db6c0610fe0fbcf6239e4ca36f5b3964,PodSandboxId:3f9706ee90e93bc1c1fae1d0a8dedf92eb3b678eb63424f5157793dd10364a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749179778473803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2489a6d3116ba4abcb5fd745efd3a4,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788985e5b730285122017e8c1ea922dacf536b244a0df55563d8abd8c82ea812,PodSandboxId:f9224348b6bbffb1638b3703a3a06ff589c58be23982afafc9c97cffeb817a9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749179533259435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6fd29cfc92d55a7ce4e2f96974ea73,},Annotations:map[string]string{io.kubernetes.container.h
ash: 22b676a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cb635e39312d50505cfff1be1008bece6fafb6e9925716e158af68a054ed36,PodSandboxId:5b958c4234c44dc8383ea3c3b511be009aed7e43360d30f8ae01aae1c9c9eb54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749179350084905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f90f3600544be0f17e2e088ab14d51,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72dfd688d628935b061a523a4c79256b249f57b06a8f0eb669951cf8fec000b,PodSandboxId:e3765aa9643509d099a1437260a4d03107c276e20d394ac7c3ca1c0443e6bf77,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749179181586753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dbed9a3f64fb2ec41dcc39fae30b654,},Annotations:map[string]string{io.kubernetes.
container.hash: 671fa91a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1e496716-7f84-4570-983c-8ca96d3c3502 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.672652617Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3aa7cb05-4f56-4296-a81a-d1d4965bcdbc name=/runtime.v1.RuntimeService/Version
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.672734264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3aa7cb05-4f56-4296-a81a-d1d4965bcdbc name=/runtime.v1.RuntimeService/Version
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.673879081Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=71f7672b-0891-4252-829a-d8ac31c9ab08 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.674328370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704749255674314674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=71f7672b-0891-4252-829a-d8ac31c9ab08 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.674955655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4ed30cb8-9440-4be8-b5c3-798dfe6ca272 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.675025797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4ed30cb8-9440-4be8-b5c3-798dfe6ca272 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.675305290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66ab0843a6f6ff05bd61e4b46a548b46fcde1eb89b283c7d5259995893a5bcac,PodSandboxId:54fb1c94d0a799e1fc5f09e472922de2e8ae87a6f4e3f4994d752e465bcf69b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704749251780573973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-wmznk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84ab7957-5a65-40e2-a54b-138c6c0894f5,},Annotations:map[string]string{io.kubernetes.container.hash: 47861c95,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f514b9f7db46f03d93efb10aa6081beb6a735ff79847159366828357f08c254e,PodSandboxId:afc00b3166b887dc5b23b0d4acee8b0a394bf58124042b47cfe16413e18e663f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749205938088039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v6dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1edff2-3b29-4045-b7b9-935c47115d16,},Annotations:map[string]string{io.kubernetes.container.hash: badac4da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a1b26d2a314cd7c2e6ae259f5dfe696999f37bffd64a9f1df3e5ce66e6375b,PodSandboxId:8fdac79f6efe7c11d9126d9e30ff59ee788931a1ab5c705c9561cae23970c627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749205701603799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a10085550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93460e758f3e5d07e3a9026668488bc41a4dcdc502b820b45307734772700b1f,PodSandboxId:86cd121bda510b4f19198dd30acd7b89fe0d8aaa22524dd80ee9390fd248e114,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704749203002050508,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5w9nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b84fc0ee-c9b1-4e6c-b066-536f2fd56d52,},Annotations:map[string]string{io.kubernetes.container.hash: d5b65b0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86347109265ff5b6e0b7c7d88a5374fd88079ddfbe4e148069a3e76cf3707a2d,PodSandboxId:b7690830550aef5cebdd9e105409608817a3742d4cd8bf6e3abb9949bd21118b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749200972560227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbfa39a4-ba62-4e31-8126-9a3203
11e846,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1bef98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62c1c3fba9d76174d48d7fa8349a1c8db6c0610fe0fbcf6239e4ca36f5b3964,PodSandboxId:3f9706ee90e93bc1c1fae1d0a8dedf92eb3b678eb63424f5157793dd10364a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749179778473803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2489a6d3116ba4abcb5fd745efd3a4,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788985e5b730285122017e8c1ea922dacf536b244a0df55563d8abd8c82ea812,PodSandboxId:f9224348b6bbffb1638b3703a3a06ff589c58be23982afafc9c97cffeb817a9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749179533259435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6fd29cfc92d55a7ce4e2f96974ea73,},Annotations:map[string]string{io.kubernetes.container.h
ash: 22b676a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cb635e39312d50505cfff1be1008bece6fafb6e9925716e158af68a054ed36,PodSandboxId:5b958c4234c44dc8383ea3c3b511be009aed7e43360d30f8ae01aae1c9c9eb54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749179350084905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f90f3600544be0f17e2e088ab14d51,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72dfd688d628935b061a523a4c79256b249f57b06a8f0eb669951cf8fec000b,PodSandboxId:e3765aa9643509d099a1437260a4d03107c276e20d394ac7c3ca1c0443e6bf77,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749179181586753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dbed9a3f64fb2ec41dcc39fae30b654,},Annotations:map[string]string{io.kubernetes.
container.hash: 671fa91a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4ed30cb8-9440-4be8-b5c3-798dfe6ca272 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.712449893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=eb6c6efc-b10f-4cc4-a486-1d5737f6d0b6 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.712536775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=eb6c6efc-b10f-4cc4-a486-1d5737f6d0b6 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.716347262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c3119784-63f1-4b8f-8e77-990f6c72d900 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.716731470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704749255716717855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c3119784-63f1-4b8f-8e77-990f6c72d900 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.717143405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a1c18a1e-c1af-4f31-9297-6b7f634bb56f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.717267006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a1c18a1e-c1af-4f31-9297-6b7f634bb56f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:27:35 multinode-962345 crio[715]: time="2024-01-08 21:27:35.717462502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66ab0843a6f6ff05bd61e4b46a548b46fcde1eb89b283c7d5259995893a5bcac,PodSandboxId:54fb1c94d0a799e1fc5f09e472922de2e8ae87a6f4e3f4994d752e465bcf69b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704749251780573973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-wmznk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84ab7957-5a65-40e2-a54b-138c6c0894f5,},Annotations:map[string]string{io.kubernetes.container.hash: 47861c95,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f514b9f7db46f03d93efb10aa6081beb6a735ff79847159366828357f08c254e,PodSandboxId:afc00b3166b887dc5b23b0d4acee8b0a394bf58124042b47cfe16413e18e663f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749205938088039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v6dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1edff2-3b29-4045-b7b9-935c47115d16,},Annotations:map[string]string{io.kubernetes.container.hash: badac4da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a1b26d2a314cd7c2e6ae259f5dfe696999f37bffd64a9f1df3e5ce66e6375b,PodSandboxId:8fdac79f6efe7c11d9126d9e30ff59ee788931a1ab5c705c9561cae23970c627,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749205701603799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a10085550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93460e758f3e5d07e3a9026668488bc41a4dcdc502b820b45307734772700b1f,PodSandboxId:86cd121bda510b4f19198dd30acd7b89fe0d8aaa22524dd80ee9390fd248e114,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704749203002050508,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5w9nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b84fc0ee-c9b1-4e6c-b066-536f2fd56d52,},Annotations:map[string]string{io.kubernetes.container.hash: d5b65b0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86347109265ff5b6e0b7c7d88a5374fd88079ddfbe4e148069a3e76cf3707a2d,PodSandboxId:b7690830550aef5cebdd9e105409608817a3742d4cd8bf6e3abb9949bd21118b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749200972560227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbfa39a4-ba62-4e31-8126-9a3203
11e846,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1bef98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62c1c3fba9d76174d48d7fa8349a1c8db6c0610fe0fbcf6239e4ca36f5b3964,PodSandboxId:3f9706ee90e93bc1c1fae1d0a8dedf92eb3b678eb63424f5157793dd10364a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749179778473803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2489a6d3116ba4abcb5fd745efd3a4,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788985e5b730285122017e8c1ea922dacf536b244a0df55563d8abd8c82ea812,PodSandboxId:f9224348b6bbffb1638b3703a3a06ff589c58be23982afafc9c97cffeb817a9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749179533259435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6fd29cfc92d55a7ce4e2f96974ea73,},Annotations:map[string]string{io.kubernetes.container.h
ash: 22b676a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cb635e39312d50505cfff1be1008bece6fafb6e9925716e158af68a054ed36,PodSandboxId:5b958c4234c44dc8383ea3c3b511be009aed7e43360d30f8ae01aae1c9c9eb54,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749179350084905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f90f3600544be0f17e2e088ab14d51,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72dfd688d628935b061a523a4c79256b249f57b06a8f0eb669951cf8fec000b,PodSandboxId:e3765aa9643509d099a1437260a4d03107c276e20d394ac7c3ca1c0443e6bf77,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749179181586753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dbed9a3f64fb2ec41dcc39fae30b654,},Annotations:map[string]string{io.kubernetes.
container.hash: 671fa91a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a1c18a1e-c1af-4f31-9297-6b7f634bb56f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	66ab0843a6f6f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   54fb1c94d0a79       busybox-5bc68d56bd-wmznk
	f514b9f7db46f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      49 seconds ago       Running             coredns                   0                   afc00b3166b88       coredns-5dd5756b68-v6dmd
	f4a1b26d2a314       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      50 seconds ago       Running             storage-provisioner       0                   8fdac79f6efe7       storage-provisioner
	93460e758f3e5       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      52 seconds ago       Running             kindnet-cni               0                   86cd121bda510       kindnet-5w9nh
	86347109265ff       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      54 seconds ago       Running             kube-proxy                0                   b7690830550ae       kube-proxy-bmjzs
	a62c1c3fba9d7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   3f9706ee90e93       kube-scheduler-multinode-962345
	788985e5b7302       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   f9224348b6bbf       etcd-multinode-962345
	e4cb635e39312       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   5b958c4234c44       kube-controller-manager-multinode-962345
	e72dfd688d628       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   e3765aa964350       kube-apiserver-multinode-962345
	
	
	==> coredns [f514b9f7db46f03d93efb10aa6081beb6a735ff79847159366828357f08c254e] <==
	[INFO] 10.244.1.2:40248 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014353s
	[INFO] 10.244.0.3:49894 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103743s
	[INFO] 10.244.0.3:43745 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001661052s
	[INFO] 10.244.0.3:41802 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005767s
	[INFO] 10.244.0.3:58152 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000029166s
	[INFO] 10.244.0.3:56212 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00123119s
	[INFO] 10.244.0.3:55475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000031293s
	[INFO] 10.244.0.3:59025 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046779s
	[INFO] 10.244.0.3:47549 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000033619s
	[INFO] 10.244.1.2:49418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146355s
	[INFO] 10.244.1.2:38126 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136159s
	[INFO] 10.244.1.2:52311 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109142s
	[INFO] 10.244.1.2:58341 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098292s
	[INFO] 10.244.0.3:54338 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000314757s
	[INFO] 10.244.0.3:56658 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095448s
	[INFO] 10.244.0.3:36232 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000224206s
	[INFO] 10.244.0.3:36931 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072059s
	[INFO] 10.244.1.2:49556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000312078s
	[INFO] 10.244.1.2:58531 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000246785s
	[INFO] 10.244.1.2:58393 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000399198s
	[INFO] 10.244.1.2:36785 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174453s
	[INFO] 10.244.0.3:33758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135841s
	[INFO] 10.244.0.3:50808 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000281087s
	[INFO] 10.244.0.3:47161 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000965s
	[INFO] 10.244.0.3:56741 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143418s
	
	
	==> describe nodes <==
	Name:               multinode-962345
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-962345
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-962345
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_26_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:26:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-962345
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:27:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:26:44 +0000   Mon, 08 Jan 2024 21:26:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:26:44 +0000   Mon, 08 Jan 2024 21:26:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:26:44 +0000   Mon, 08 Jan 2024 21:26:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:26:44 +0000   Mon, 08 Jan 2024 21:26:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    multinode-962345
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2493634cfb3e4223bbb0128883aa3ce6
	  System UUID:                2493634c-fb3e-4223-bbb0-128883aa3ce6
	  Boot ID:                    bc4657d8-a401-42e5-b3b9-6386d05b0600
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wmznk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-v6dmd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     56s
	  kube-system                 etcd-multinode-962345                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	  kube-system                 kindnet-5w9nh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      56s
	  kube-system                 kube-apiserver-multinode-962345             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-multinode-962345    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-bmjzs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-multinode-962345             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node multinode-962345 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node multinode-962345 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node multinode-962345 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s                kubelet          Node multinode-962345 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s                kubelet          Node multinode-962345 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s                kubelet          Node multinode-962345 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           57s                node-controller  Node multinode-962345 event: Registered Node multinode-962345 in Controller
	  Normal  NodeReady                51s                kubelet          Node multinode-962345 status is now: NodeReady
	
	
	Name:               multinode-962345-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-962345-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-962345
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T21_27_19_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:27:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-962345-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:27:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:27:27 +0000   Mon, 08 Jan 2024 21:27:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:27:27 +0000   Mon, 08 Jan 2024 21:27:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:27:27 +0000   Mon, 08 Jan 2024 21:27:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:27:27 +0000   Mon, 08 Jan 2024 21:27:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    multinode-962345-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d203a689d1184325a914612dcf629058
	  System UUID:                d203a689-d118-4325-a914-612dcf629058
	  Boot ID:                    7b05427d-a8d8-442e-9294-a598d1ded15b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-qwxd6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-mvv2x               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16s
	  kube-system                 kube-proxy-2c2t6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  16s (x5 over 18s)  kubelet          Node multinode-962345-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s (x5 over 18s)  kubelet          Node multinode-962345-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s (x5 over 18s)  kubelet          Node multinode-962345-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                node-controller  Node multinode-962345-m02 event: Registered Node multinode-962345-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-962345-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan 8 21:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065078] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.354358] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.540394] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150556] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan 8 21:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.717132] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.108859] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.148657] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.111584] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.199786] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +9.616281] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +9.261930] systemd-fstab-generator[1251]: Ignoring "noauto" for root device
	[ +20.648276] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [788985e5b730285122017e8c1ea922dacf536b244a0df55563d8abd8c82ea812] <==
	{"level":"info","ts":"2024-01-08T21:26:21.434899Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"de5b23b13807dd2","initial-advertise-peer-urls":["https://192.168.39.239:2380"],"listen-peer-urls":["https://192.168.39.239:2380"],"advertise-client-urls":["https://192.168.39.239:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.239:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T21:26:21.435013Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T21:26:22.074309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T21:26:22.074459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T21:26:22.074524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 received MsgPreVoteResp from de5b23b13807dd2 at term 1"}
	{"level":"info","ts":"2024-01-08T21:26:22.074562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:26:22.074586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 received MsgVoteResp from de5b23b13807dd2 at term 2"}
	{"level":"info","ts":"2024-01-08T21:26:22.074617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T21:26:22.074643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: de5b23b13807dd2 elected leader de5b23b13807dd2 at term 2"}
	{"level":"info","ts":"2024-01-08T21:26:22.07606Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:26:22.077283Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"de5b23b13807dd2","local-member-attributes":"{Name:multinode-962345 ClientURLs:[https://192.168.39.239:2379]}","request-path":"/0/members/de5b23b13807dd2/attributes","cluster-id":"9f81b65ca2cd0829","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:26:22.077408Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:26:22.078035Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9f81b65ca2cd0829","local-member-id":"de5b23b13807dd2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:26:22.07813Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:26:22.078166Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:26:22.07887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.239:2379"}
	{"level":"info","ts":"2024-01-08T21:26:22.07911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:26:22.079998Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:26:22.081456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:26:22.0815Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:27:24.044621Z","caller":"traceutil/trace.go:171","msg":"trace[1621432173] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"170.903341ms","start":"2024-01-08T21:27:23.873685Z","end":"2024-01-08T21:27:24.044589Z","steps":["trace[1621432173] 'process raft request'  (duration: 170.76534ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:27:24.211669Z","caller":"traceutil/trace.go:171","msg":"trace[1317719222] linearizableReadLoop","detail":"{readStateIndex:527; appliedIndex:526; }","duration":"118.152222ms","start":"2024-01-08T21:27:24.093503Z","end":"2024-01-08T21:27:24.211655Z","steps":["trace[1317719222] 'read index received'  (duration: 112.954095ms)","trace[1317719222] 'applied index is now lower than readState.Index'  (duration: 5.197486ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:27:24.211926Z","caller":"traceutil/trace.go:171","msg":"trace[2007258846] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"160.471064ms","start":"2024-01-08T21:27:24.051447Z","end":"2024-01-08T21:27:24.211918Z","steps":["trace[2007258846] 'process raft request'  (duration: 155.060487ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:27:24.212267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.666148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-08T21:27:24.212348Z","caller":"traceutil/trace.go:171","msg":"trace[227267971] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:507; }","duration":"118.854329ms","start":"2024-01-08T21:27:24.09348Z","end":"2024-01-08T21:27:24.212334Z","steps":["trace[227267971] 'agreement among raft nodes before linearized reading'  (duration: 118.630134ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:27:36 up 1 min,  0 users,  load average: 0.81, 0.31, 0.11
	Linux multinode-962345 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [93460e758f3e5d07e3a9026668488bc41a4dcdc502b820b45307734772700b1f] <==
	I0108 21:26:43.755363       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 21:26:43.755446       1 main.go:107] hostIP = 192.168.39.239
	podIP = 192.168.39.239
	I0108 21:26:43.755662       1 main.go:116] setting mtu 1500 for CNI 
	I0108 21:26:43.755701       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 21:26:43.755722       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 21:26:44.451259       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:26:44.451551       1 main.go:227] handling current node
	I0108 21:26:54.462892       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:26:54.462950       1 main.go:227] handling current node
	I0108 21:27:04.472743       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:27:04.472766       1 main.go:227] handling current node
	I0108 21:27:14.478546       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:27:14.478653       1 main.go:227] handling current node
	I0108 21:27:24.487812       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:27:24.487914       1 main.go:227] handling current node
	I0108 21:27:24.487943       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0108 21:27:24.487962       1 main.go:250] Node multinode-962345-m02 has CIDR [10.244.1.0/24] 
	I0108 21:27:24.488292       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.111 Flags: [] Table: 0} 
	I0108 21:27:34.501678       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:27:34.501877       1 main.go:227] handling current node
	I0108 21:27:34.501919       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0108 21:27:34.501951       1 main.go:250] Node multinode-962345-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e72dfd688d628935b061a523a4c79256b249f57b06a8f0eb669951cf8fec000b] <==
	I0108 21:26:23.524691       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 21:26:23.527899       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 21:26:23.527959       1 aggregator.go:166] initial CRD sync complete...
	I0108 21:26:23.527982       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 21:26:23.528004       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 21:26:23.528026       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:26:23.536667       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:26:23.536718       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:26:23.552671       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 21:26:23.591943       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:26:24.419310       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:26:24.427720       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:26:24.428495       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:26:25.049511       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:26:25.145959       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:26:25.287346       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 21:26:25.299409       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.239]
	I0108 21:26:25.300422       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 21:26:25.312072       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:26:25.492088       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 21:26:26.635654       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 21:26:26.655636       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 21:26:26.670789       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 21:26:38.817578       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 21:26:39.066046       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [e4cb635e39312d50505cfff1be1008bece6fafb6e9925716e158af68a054ed36] <==
	I0108 21:26:39.868583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.426388ms"
	I0108 21:26:39.868697       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.25µs"
	I0108 21:26:44.909389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="938.512µs"
	I0108 21:26:44.945037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="533.255µs"
	I0108 21:26:46.973789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.881µs"
	I0108 21:26:47.020160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.116694ms"
	I0108 21:26:47.020823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="336.462µs"
	I0108 21:26:48.362825       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0108 21:27:19.123832       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-962345-m02\" does not exist"
	I0108 21:27:19.145790       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-962345-m02" podCIDRs=["10.244.1.0/24"]
	I0108 21:27:19.148483       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2c2t6"
	I0108 21:27:19.148494       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mvv2x"
	I0108 21:27:23.368595       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-962345-m02"
	I0108 21:27:23.368680       1 event.go:307] "Event occurred" object="multinode-962345-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-962345-m02 event: Registered Node multinode-962345-m02 in Controller"
	I0108 21:27:27.835303       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-962345-m02"
	I0108 21:27:30.049983       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 21:27:30.073879       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-qwxd6"
	I0108 21:27:30.080272       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wmznk"
	I0108 21:27:30.114495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.880105ms"
	I0108 21:27:30.140506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="25.920454ms"
	I0108 21:27:30.140680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.985µs"
	I0108 21:27:31.913421       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.712591ms"
	I0108 21:27:31.913711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.532µs"
	I0108 21:27:32.137890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.278206ms"
	I0108 21:27:32.137996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="40.464µs"
	
	
	==> kube-proxy [86347109265ff5b6e0b7c7d88a5374fd88079ddfbe4e148069a3e76cf3707a2d] <==
	I0108 21:26:41.210005       1 server_others.go:69] "Using iptables proxy"
	I0108 21:26:41.225092       1 node.go:141] Successfully retrieved node IP: 192.168.39.239
	I0108 21:26:41.283127       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:26:41.283301       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:26:41.285862       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:26:41.286312       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:26:41.286501       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:26:41.286543       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:26:41.288671       1 config.go:188] "Starting service config controller"
	I0108 21:26:41.289024       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:26:41.289084       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:26:41.289090       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:26:41.291352       1 config.go:315] "Starting node config controller"
	I0108 21:26:41.291386       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:26:41.389576       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:26:41.389671       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:26:41.391613       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a62c1c3fba9d76174d48d7fa8349a1c8db6c0610fe0fbcf6239e4ca36f5b3964] <==
	W0108 21:26:23.549031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:26:23.549039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:26:23.559117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:26:23.559255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:26:24.529503       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:26:24.529554       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:26:24.530517       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:26:24.530566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:26:24.602618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:26:24.602732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:26:24.605143       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:26:24.605264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:26:24.605293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:26:24.605337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:26:24.668782       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:26:24.668909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:26:24.727456       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:26:24.727541       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:26:24.806931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:26:24.807021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:26:24.835667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:26:24.835754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:26:24.846861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:26:24.846956       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0108 21:26:26.825117       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:25:55 UTC, ends at Mon 2024-01-08 21:27:36 UTC. --
	Jan 08 21:26:39 multinode-962345 kubelet[1258]: I0108 21:26:39.165625    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62zdz\" (UniqueName: \"kubernetes.io/projected/b84fc0ee-c9b1-4e6c-b066-536f2fd56d52-kube-api-access-62zdz\") pod \"kindnet-5w9nh\" (UID: \"b84fc0ee-c9b1-4e6c-b066-536f2fd56d52\") " pod="kube-system/kindnet-5w9nh"
	Jan 08 21:26:39 multinode-962345 kubelet[1258]: I0108 21:26:39.165655    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b84fc0ee-c9b1-4e6c-b066-536f2fd56d52-lib-modules\") pod \"kindnet-5w9nh\" (UID: \"b84fc0ee-c9b1-4e6c-b066-536f2fd56d52\") " pod="kube-system/kindnet-5w9nh"
	Jan 08 21:26:39 multinode-962345 kubelet[1258]: E0108 21:26:39.279869    1258 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 08 21:26:39 multinode-962345 kubelet[1258]: E0108 21:26:39.279910    1258 projected.go:198] Error preparing data for projected volume kube-api-access-bpgnh for pod kube-system/kube-proxy-bmjzs: configmap "kube-root-ca.crt" not found
	Jan 08 21:26:39 multinode-962345 kubelet[1258]: E0108 21:26:39.280068    1258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fbfa39a4-ba62-4e31-8126-9a320311e846-kube-api-access-bpgnh podName:fbfa39a4-ba62-4e31-8126-9a320311e846 nodeName:}" failed. No retries permitted until 2024-01-08 21:26:39.779952527 +0000 UTC m=+13.183890229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bpgnh" (UniqueName: "kubernetes.io/projected/fbfa39a4-ba62-4e31-8126-9a320311e846-kube-api-access-bpgnh") pod "kube-proxy-bmjzs" (UID: "fbfa39a4-ba62-4e31-8126-9a320311e846") : configmap "kube-root-ca.crt" not found
	Jan 08 21:26:39 multinode-962345 kubelet[1258]: E0108 21:26:39.280907    1258 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 08 21:26:39 multinode-962345 kubelet[1258]: E0108 21:26:39.280923    1258 projected.go:198] Error preparing data for projected volume kube-api-access-62zdz for pod kube-system/kindnet-5w9nh: configmap "kube-root-ca.crt" not found
	Jan 08 21:26:39 multinode-962345 kubelet[1258]: E0108 21:26:39.280961    1258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b84fc0ee-c9b1-4e6c-b066-536f2fd56d52-kube-api-access-62zdz podName:b84fc0ee-c9b1-4e6c-b066-536f2fd56d52 nodeName:}" failed. No retries permitted until 2024-01-08 21:26:39.780947101 +0000 UTC m=+13.184884814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-62zdz" (UniqueName: "kubernetes.io/projected/b84fc0ee-c9b1-4e6c-b066-536f2fd56d52-kube-api-access-62zdz") pod "kindnet-5w9nh" (UID: "b84fc0ee-c9b1-4e6c-b066-536f2fd56d52") : configmap "kube-root-ca.crt" not found
	Jan 08 21:26:43 multinode-962345 kubelet[1258]: I0108 21:26:43.956068    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bmjzs" podStartSLOduration=4.9560096829999996 podCreationTimestamp="2024-01-08 21:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:26:41.943909186 +0000 UTC m=+15.347846911" watchObservedRunningTime="2024-01-08 21:26:43.956009683 +0000 UTC m=+17.359947386"
	Jan 08 21:26:43 multinode-962345 kubelet[1258]: I0108 21:26:43.956289    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5w9nh" podStartSLOduration=4.95626776 podCreationTimestamp="2024-01-08 21:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:26:43.953966407 +0000 UTC m=+17.357904130" watchObservedRunningTime="2024-01-08 21:26:43.95626776 +0000 UTC m=+17.360205485"
	Jan 08 21:26:44 multinode-962345 kubelet[1258]: I0108 21:26:44.857623    1258 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 21:26:44 multinode-962345 kubelet[1258]: I0108 21:26:44.905706    1258 topology_manager.go:215] "Topology Admit Handler" podUID="9c1edff2-3b29-4045-b7b9-935c47115d16" podNamespace="kube-system" podName="coredns-5dd5756b68-v6dmd"
	Jan 08 21:26:44 multinode-962345 kubelet[1258]: I0108 21:26:44.910053    1258 topology_manager.go:215] "Topology Admit Handler" podUID="da89492c-e129-462d-b84e-2f4a10085550" podNamespace="kube-system" podName="storage-provisioner"
	Jan 08 21:26:45 multinode-962345 kubelet[1258]: I0108 21:26:45.009635    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhxpp\" (UniqueName: \"kubernetes.io/projected/9c1edff2-3b29-4045-b7b9-935c47115d16-kube-api-access-mhxpp\") pod \"coredns-5dd5756b68-v6dmd\" (UID: \"9c1edff2-3b29-4045-b7b9-935c47115d16\") " pod="kube-system/coredns-5dd5756b68-v6dmd"
	Jan 08 21:26:45 multinode-962345 kubelet[1258]: I0108 21:26:45.009680    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/da89492c-e129-462d-b84e-2f4a10085550-tmp\") pod \"storage-provisioner\" (UID: \"da89492c-e129-462d-b84e-2f4a10085550\") " pod="kube-system/storage-provisioner"
	Jan 08 21:26:45 multinode-962345 kubelet[1258]: I0108 21:26:45.009701    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c1edff2-3b29-4045-b7b9-935c47115d16-config-volume\") pod \"coredns-5dd5756b68-v6dmd\" (UID: \"9c1edff2-3b29-4045-b7b9-935c47115d16\") " pod="kube-system/coredns-5dd5756b68-v6dmd"
	Jan 08 21:26:45 multinode-962345 kubelet[1258]: I0108 21:26:45.009721    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxkkd\" (UniqueName: \"kubernetes.io/projected/da89492c-e129-462d-b84e-2f4a10085550-kube-api-access-zxkkd\") pod \"storage-provisioner\" (UID: \"da89492c-e129-462d-b84e-2f4a10085550\") " pod="kube-system/storage-provisioner"
	Jan 08 21:26:45 multinode-962345 kubelet[1258]: I0108 21:26:45.968345    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.9683085479999995 podCreationTimestamp="2024-01-08 21:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:26:45.968251661 +0000 UTC m=+19.372189383" watchObservedRunningTime="2024-01-08 21:26:45.968308548 +0000 UTC m=+19.372246269"
	Jan 08 21:26:46 multinode-962345 kubelet[1258]: I0108 21:26:46.972760    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-v6dmd" podStartSLOduration=7.9727106469999995 podCreationTimestamp="2024-01-08 21:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:26:46.970267242 +0000 UTC m=+20.374204961" watchObservedRunningTime="2024-01-08 21:26:46.972710647 +0000 UTC m=+20.376648366"
	Jan 08 21:27:26 multinode-962345 kubelet[1258]: E0108 21:27:26.885369    1258 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:27:26 multinode-962345 kubelet[1258]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:27:26 multinode-962345 kubelet[1258]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:27:26 multinode-962345 kubelet[1258]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:27:30 multinode-962345 kubelet[1258]: I0108 21:27:30.101751    1258 topology_manager.go:215] "Topology Admit Handler" podUID="84ab7957-5a65-40e2-a54b-138c6c0894f5" podNamespace="default" podName="busybox-5bc68d56bd-wmznk"
	Jan 08 21:27:30 multinode-962345 kubelet[1258]: I0108 21:27:30.149726    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddfv\" (UniqueName: \"kubernetes.io/projected/84ab7957-5a65-40e2-a54b-138c6c0894f5-kube-api-access-zddfv\") pod \"busybox-5bc68d56bd-wmznk\" (UID: \"84ab7957-5a65-40e2-a54b-138c6c0894f5\") " pod="default/busybox-5bc68d56bd-wmznk"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-962345 -n multinode-962345
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-962345 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.25s)
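Context for the failure above: the kindnet log in the post-mortem shows the route for the m02 pod CIDR (10.244.1.0/24 via 192.168.39.111) being installed on the primary node shortly before the ping test ran. A rough manual check of that routing state, assuming the same profile and kubectl context names used in this run, would be:
	# assumes profile/context "multinode-962345" from this run
	out/minikube-linux-amd64 -p multinode-962345 ssh "ip route show | grep 10.244"
	kubectl --context multinode-962345 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.podCIDR}{"\n"}{end}'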

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (689.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-962345
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-962345
E0108 21:29:44.964318  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:29:56.855329  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-962345: exit status 82 (2m1.625740679s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-962345"  ...
	* Stopping node "multinode-962345"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-962345" : exit status 82
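The stop above timed out waiting for the VM to shut down (GUEST_STOP_TIMEOUT, exit status 82). A minimal sketch of re-running the failing stop by hand with verbose output and capturing the log file the advice box refers to, assuming the same profile name, would be:
	# assumes profile "multinode-962345" from this run
	out/minikube-linux-amd64 stop -p multinode-962345 --alsologtostderr -v=8
	out/minikube-linux-amd64 -p multinode-962345 logs --file=logs.txt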
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962345 --wait=true -v=8 --alsologtostderr
E0108 21:31:20.144625  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:32:44.574683  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:34:44.964123  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:34:56.855428  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:36:08.012553  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:37:44.574455  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:39:07.620415  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:39:44.964463  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:39:56.854458  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-962345 --wait=true -v=8 --alsologtostderr: (9m25.451701276s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-962345
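This test compares the node list before and after the restart, so a quick manual verification once the start completes, assuming the same profile and kubectl context, would be:
	# assumes profile/context "multinode-962345" from this run
	out/minikube-linux-amd64 node list -p multinode-962345
	kubectl --context multinode-962345 get nodes -o wide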
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-962345 -n multinode-962345
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-962345 logs -n 25: (1.555059193s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-962345 ssh -n                                                                 | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-962345 cp multinode-962345-m02:/home/docker/cp-test.txt                       | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2245121153/001/cp-test_multinode-962345-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n                                                                 | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-962345 cp multinode-962345-m02:/home/docker/cp-test.txt                       | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345:/home/docker/cp-test_multinode-962345-m02_multinode-962345.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n                                                                 | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n multinode-962345 sudo cat                                       | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | /home/docker/cp-test_multinode-962345-m02_multinode-962345.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-962345 cp multinode-962345-m02:/home/docker/cp-test.txt                       | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m03:/home/docker/cp-test_multinode-962345-m02_multinode-962345-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n                                                                 | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n multinode-962345-m03 sudo cat                                   | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | /home/docker/cp-test_multinode-962345-m02_multinode-962345-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-962345 cp testdata/cp-test.txt                                                | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n                                                                 | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-962345 cp multinode-962345-m03:/home/docker/cp-test.txt                       | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2245121153/001/cp-test_multinode-962345-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n                                                                 | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-962345 cp multinode-962345-m03:/home/docker/cp-test.txt                       | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345:/home/docker/cp-test_multinode-962345-m03_multinode-962345.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n                                                                 | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n multinode-962345 sudo cat                                       | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | /home/docker/cp-test_multinode-962345-m03_multinode-962345.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-962345 cp multinode-962345-m03:/home/docker/cp-test.txt                       | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m02:/home/docker/cp-test_multinode-962345-m03_multinode-962345-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n                                                                 | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | multinode-962345-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-962345 ssh -n multinode-962345-m02 sudo cat                                   | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | /home/docker/cp-test_multinode-962345-m03_multinode-962345-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-962345 node stop m03                                                          | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	| node    | multinode-962345 node start                                                             | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:29 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-962345                                                                | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	| stop    | -p multinode-962345                                                                     | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	| start   | -p multinode-962345                                                                     | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC | 08 Jan 24 21:40 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-962345                                                                | multinode-962345 | jenkins | v1.32.0 | 08 Jan 24 21:40 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:31:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:31:06.328704  358628 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:31:06.328853  358628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:06.328864  358628 out.go:309] Setting ErrFile to fd 2...
	I0108 21:31:06.328872  358628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:06.329097  358628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 21:31:06.329688  358628 out.go:303] Setting JSON to false
	I0108 21:31:06.330708  358628 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7992,"bootTime":1704741474,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:31:06.330774  358628 start.go:138] virtualization: kvm guest
	I0108 21:31:06.333093  358628 out.go:177] * [multinode-962345] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:31:06.334425  358628 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:31:06.335721  358628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:31:06.334524  358628 notify.go:220] Checking for updates...
	I0108 21:31:06.337931  358628 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:31:06.339177  358628 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:31:06.340389  358628 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:31:06.341526  358628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:31:06.343401  358628 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:31:06.343481  358628 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:31:06.343897  358628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:31:06.343949  358628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:31:06.360926  358628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0108 21:31:06.361415  358628 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:31:06.361959  358628 main.go:141] libmachine: Using API Version  1
	I0108 21:31:06.361976  358628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:31:06.362343  358628 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:31:06.362560  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:31:06.398064  358628 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 21:31:06.399338  358628 start.go:298] selected driver: kvm2
	I0108 21:31:06.399349  358628 start.go:902] validating driver "kvm2" against &{Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.120 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:31:06.399533  358628 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:31:06.399963  358628 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:06.400049  358628 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:31:06.415248  358628 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:31:06.416613  358628 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:31:06.416790  358628 cni.go:84] Creating CNI manager for ""
	I0108 21:31:06.416822  358628 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:31:06.416854  358628 start_flags.go:321] config:
	{Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.120 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:31:06.417534  358628 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:06.419404  358628 out.go:177] * Starting control plane node multinode-962345 in cluster multinode-962345
	I0108 21:31:06.420636  358628 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:31:06.420666  358628 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:31:06.420676  358628 cache.go:56] Caching tarball of preloaded images
	I0108 21:31:06.420735  358628 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:31:06.420769  358628 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:31:06.420896  358628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:31:06.421082  358628 start.go:365] acquiring machines lock for multinode-962345: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:31:06.421120  358628 start.go:369] acquired machines lock for "multinode-962345" in 22.091µs
	I0108 21:31:06.421133  358628 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:31:06.421140  358628 fix.go:54] fixHost starting: 
	I0108 21:31:06.421392  358628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:31:06.421417  358628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:31:06.434648  358628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0108 21:31:06.435044  358628 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:31:06.435534  358628 main.go:141] libmachine: Using API Version  1
	I0108 21:31:06.435566  358628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:31:06.435855  358628 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:31:06.436006  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:31:06.436206  358628 main.go:141] libmachine: (multinode-962345) Calling .GetState
	I0108 21:31:06.437600  358628 fix.go:102] recreateIfNeeded on multinode-962345: state=Running err=<nil>
	W0108 21:31:06.437617  358628 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:31:06.439389  358628 out.go:177] * Updating the running kvm2 "multinode-962345" VM ...
	I0108 21:31:06.440440  358628 machine.go:88] provisioning docker machine ...
	I0108 21:31:06.440457  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:31:06.440637  358628 main.go:141] libmachine: (multinode-962345) Calling .GetMachineName
	I0108 21:31:06.440786  358628 buildroot.go:166] provisioning hostname "multinode-962345"
	I0108 21:31:06.440808  358628 main.go:141] libmachine: (multinode-962345) Calling .GetMachineName
	I0108 21:31:06.440996  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:31:06.443389  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:31:06.443811  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:31:06.443831  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:31:06.443998  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:31:06.444166  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:31:06.444353  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:31:06.444454  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:31:06.444608  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:31:06.444937  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:31:06.444952  358628 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-962345 && echo "multinode-962345" | sudo tee /etc/hostname
	I0108 21:31:24.899707  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:31:30.983641  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:31:34.051641  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:31:40.131673  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:31:43.203672  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:31:49.283653  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:31:52.355619  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:31:58.435774  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:01.507666  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:07.587649  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:10.659788  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:16.739869  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:19.811679  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:25.891742  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:28.963761  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:35.043761  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:38.115722  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:44.195680  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:47.267707  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:53.347678  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:32:56.419676  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:02.499635  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:05.571658  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:11.651775  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:14.723632  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:20.803660  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:23.875746  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:29.955724  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:33.027736  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:39.107690  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:42.179622  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:48.259719  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:51.331686  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:33:57.411755  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:00.483695  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:06.563634  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:09.635679  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:15.715681  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:18.787698  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:24.867710  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:27.939707  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:34.019759  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:37.091718  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:43.171666  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:46.243769  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:52.323666  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:34:55.395632  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:01.475699  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:04.547656  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:10.627722  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:13.699680  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:19.779770  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:22.851766  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:28.931681  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:32.003665  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:38.083668  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:41.155725  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:47.235676  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:50.307744  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:56.387823  358628 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.239:22: connect: no route to host
	I0108 21:35:59.390212  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:35:59.390278  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:35:59.392449  358628 machine.go:91] provisioned docker machine in 4m52.951990441s
	I0108 21:35:59.392518  358628 fix.go:56] fixHost completed within 4m52.971378315s
	I0108 21:35:59.392527  358628 start.go:83] releasing machines lock for "multinode-962345", held for 4m52.971398934s
	W0108 21:35:59.392542  358628 start.go:694] error starting host: provision: host is not running
	W0108 21:35:59.392651  358628 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0108 21:35:59.392661  358628 start.go:709] Will try again in 5 seconds ...
	I0108 21:36:04.395695  358628 start.go:365] acquiring machines lock for multinode-962345: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:36:04.395846  358628 start.go:369] acquired machines lock for "multinode-962345" in 80.456µs
	I0108 21:36:04.395886  358628 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:36:04.395897  358628 fix.go:54] fixHost starting: 
	I0108 21:36:04.396243  358628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:04.396268  358628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:04.412887  358628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0108 21:36:04.413341  358628 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:04.413859  358628 main.go:141] libmachine: Using API Version  1
	I0108 21:36:04.413887  358628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:04.414278  358628 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:04.414503  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:36:04.414719  358628 main.go:141] libmachine: (multinode-962345) Calling .GetState
	I0108 21:36:04.416365  358628 fix.go:102] recreateIfNeeded on multinode-962345: state=Stopped err=<nil>
	I0108 21:36:04.416389  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	W0108 21:36:04.416551  358628 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:36:04.419038  358628 out.go:177] * Restarting existing kvm2 VM for "multinode-962345" ...
	I0108 21:36:04.420449  358628 main.go:141] libmachine: (multinode-962345) Calling .Start
	I0108 21:36:04.420666  358628 main.go:141] libmachine: (multinode-962345) Ensuring networks are active...
	I0108 21:36:04.421434  358628 main.go:141] libmachine: (multinode-962345) Ensuring network default is active
	I0108 21:36:04.421775  358628 main.go:141] libmachine: (multinode-962345) Ensuring network mk-multinode-962345 is active
	I0108 21:36:04.422095  358628 main.go:141] libmachine: (multinode-962345) Getting domain xml...
	I0108 21:36:04.422735  358628 main.go:141] libmachine: (multinode-962345) Creating domain...
	I0108 21:36:05.656597  358628 main.go:141] libmachine: (multinode-962345) Waiting to get IP...
	I0108 21:36:05.657760  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:05.658255  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:05.658354  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:05.658230  359429 retry.go:31] will retry after 235.080905ms: waiting for machine to come up
	I0108 21:36:05.894988  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:05.895486  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:05.895512  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:05.895443  359429 retry.go:31] will retry after 243.201742ms: waiting for machine to come up
	I0108 21:36:06.139850  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:06.140262  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:06.140302  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:06.140207  359429 retry.go:31] will retry after 327.996285ms: waiting for machine to come up
	I0108 21:36:06.469674  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:06.470178  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:06.470203  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:06.470150  359429 retry.go:31] will retry after 383.508498ms: waiting for machine to come up
	I0108 21:36:06.856042  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:06.856525  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:06.856559  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:06.856460  359429 retry.go:31] will retry after 753.875ms: waiting for machine to come up
	I0108 21:36:07.612478  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:07.613121  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:07.613154  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:07.613067  359429 retry.go:31] will retry after 902.50094ms: waiting for machine to come up
	I0108 21:36:08.517224  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:08.517643  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:08.517661  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:08.517621  359429 retry.go:31] will retry after 906.392628ms: waiting for machine to come up
	I0108 21:36:09.425198  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:09.425682  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:09.425720  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:09.425603  359429 retry.go:31] will retry after 1.362923564s: waiting for machine to come up
	I0108 21:36:10.790252  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:10.790740  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:10.790773  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:10.790669  359429 retry.go:31] will retry after 1.499176777s: waiting for machine to come up
	I0108 21:36:12.292444  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:12.292854  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:12.292889  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:12.292816  359429 retry.go:31] will retry after 2.094583504s: waiting for machine to come up
	I0108 21:36:14.390148  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:14.390579  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:14.390625  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:14.390541  359429 retry.go:31] will retry after 2.905491771s: waiting for machine to come up
	I0108 21:36:17.300402  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:17.300884  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:17.300921  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:17.300854  359429 retry.go:31] will retry after 2.792500348s: waiting for machine to come up
	I0108 21:36:20.094582  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:20.095026  358628 main.go:141] libmachine: (multinode-962345) DBG | unable to find current IP address of domain multinode-962345 in network mk-multinode-962345
	I0108 21:36:20.095063  358628 main.go:141] libmachine: (multinode-962345) DBG | I0108 21:36:20.094957  359429 retry.go:31] will retry after 4.53987869s: waiting for machine to come up
	I0108 21:36:24.636004  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.636484  358628 main.go:141] libmachine: (multinode-962345) Found IP for machine: 192.168.39.239
	I0108 21:36:24.636500  358628 main.go:141] libmachine: (multinode-962345) Reserving static IP address...
	I0108 21:36:24.636520  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has current primary IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.637033  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "multinode-962345", mac: "52:54:00:cf:54:bf", ip: "192.168.39.239"} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:24.637058  358628 main.go:141] libmachine: (multinode-962345) DBG | skip adding static IP to network mk-multinode-962345 - found existing host DHCP lease matching {name: "multinode-962345", mac: "52:54:00:cf:54:bf", ip: "192.168.39.239"}
	I0108 21:36:24.637079  358628 main.go:141] libmachine: (multinode-962345) Reserved static IP address: 192.168.39.239
	I0108 21:36:24.637096  358628 main.go:141] libmachine: (multinode-962345) Waiting for SSH to be available...
	I0108 21:36:24.637104  358628 main.go:141] libmachine: (multinode-962345) DBG | Getting to WaitForSSH function...
	I0108 21:36:24.639139  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.639473  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:24.639503  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.639577  358628 main.go:141] libmachine: (multinode-962345) DBG | Using SSH client type: external
	I0108 21:36:24.639641  358628 main.go:141] libmachine: (multinode-962345) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa (-rw-------)
	I0108 21:36:24.639683  358628 main.go:141] libmachine: (multinode-962345) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:36:24.639790  358628 main.go:141] libmachine: (multinode-962345) DBG | About to run SSH command:
	I0108 21:36:24.639813  358628 main.go:141] libmachine: (multinode-962345) DBG | exit 0
	I0108 21:36:24.727125  358628 main.go:141] libmachine: (multinode-962345) DBG | SSH cmd err, output: <nil>: 
	I0108 21:36:24.727567  358628 main.go:141] libmachine: (multinode-962345) Calling .GetConfigRaw
	I0108 21:36:24.728302  358628 main.go:141] libmachine: (multinode-962345) Calling .GetIP
	I0108 21:36:24.731072  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.731511  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:24.731546  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.731829  358628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:36:24.732081  358628 machine.go:88] provisioning docker machine ...
	I0108 21:36:24.732104  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:36:24.732326  358628 main.go:141] libmachine: (multinode-962345) Calling .GetMachineName
	I0108 21:36:24.732490  358628 buildroot.go:166] provisioning hostname "multinode-962345"
	I0108 21:36:24.732510  358628 main.go:141] libmachine: (multinode-962345) Calling .GetMachineName
	I0108 21:36:24.732686  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:36:24.734752  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.735161  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:24.735190  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.735430  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:36:24.735616  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:24.735760  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:24.735911  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:36:24.736064  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:36:24.736472  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:36:24.736491  358628 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-962345 && echo "multinode-962345" | sudo tee /etc/hostname
	I0108 21:36:24.864534  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-962345
	
	I0108 21:36:24.864572  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:36:24.867341  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.867764  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:24.867799  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.868004  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:36:24.868304  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:24.868512  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:24.868673  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:36:24.868851  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:36:24.869263  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:36:24.869288  358628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-962345' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-962345/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-962345' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:36:24.991992  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:36:24.992056  358628 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 21:36:24.992091  358628 buildroot.go:174] setting up certificates
	I0108 21:36:24.992107  358628 provision.go:83] configureAuth start
	I0108 21:36:24.992126  358628 main.go:141] libmachine: (multinode-962345) Calling .GetMachineName
	I0108 21:36:24.992460  358628 main.go:141] libmachine: (multinode-962345) Calling .GetIP
	I0108 21:36:24.995241  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.995598  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:24.995636  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.995749  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:36:24.998301  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.998730  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:24.998774  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:24.998903  358628 provision.go:138] copyHostCerts
	I0108 21:36:24.998988  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:36:24.999081  358628 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 21:36:24.999095  358628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:36:24.999157  358628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 21:36:24.999238  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:36:24.999264  358628 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 21:36:24.999271  358628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:36:24.999295  358628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 21:36:24.999338  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:36:24.999353  358628 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 21:36:24.999375  358628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:36:24.999407  358628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 21:36:24.999460  358628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.multinode-962345 san=[192.168.39.239 192.168.39.239 localhost 127.0.0.1 minikube multinode-962345]
	I0108 21:36:25.160449  358628 provision.go:172] copyRemoteCerts
	I0108 21:36:25.160514  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:36:25.160563  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:36:25.163212  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.163572  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:25.163601  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.163799  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:36:25.164000  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:25.164156  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:36:25.164286  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:36:25.248266  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:36:25.248337  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:36:25.271972  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:36:25.272054  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:36:25.294706  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:36:25.294777  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 21:36:25.318798  358628 provision.go:86] duration metric: configureAuth took 326.673502ms
	I0108 21:36:25.318829  358628 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:36:25.319085  358628 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:36:25.319178  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:36:25.321974  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.322340  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:25.322372  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.322519  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:36:25.322732  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:25.322904  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:25.323060  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:36:25.323245  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:36:25.323587  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:36:25.323603  358628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:36:25.618618  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:36:25.618645  358628 machine.go:91] provisioned docker machine in 886.547582ms
	I0108 21:36:25.618655  358628 start.go:300] post-start starting for "multinode-962345" (driver="kvm2")
	I0108 21:36:25.618666  358628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:36:25.618683  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:36:25.619039  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:36:25.619080  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:36:25.621886  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.622226  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:25.622251  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.622537  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:36:25.622734  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:25.622927  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:36:25.623085  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:36:25.710232  358628 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:36:25.714576  358628 command_runner.go:130] > NAME=Buildroot
	I0108 21:36:25.714601  358628 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0108 21:36:25.714627  358628 command_runner.go:130] > ID=buildroot
	I0108 21:36:25.714636  358628 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:36:25.714643  358628 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:36:25.714766  358628 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:36:25.714789  358628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 21:36:25.714880  358628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 21:36:25.714956  358628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 21:36:25.714973  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /etc/ssl/certs/3419822.pem
	I0108 21:36:25.715063  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:36:25.725478  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:36:25.750868  358628 start.go:303] post-start completed in 132.194326ms
	I0108 21:36:25.750903  358628 fix.go:56] fixHost completed within 21.35500509s
	I0108 21:36:25.750932  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:36:25.753822  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.754111  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:25.754139  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.754325  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:36:25.754552  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:25.754741  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:25.754905  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:36:25.755125  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:36:25.755547  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 21:36:25.755563  358628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:36:25.868585  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704749785.816040122
	
	I0108 21:36:25.868617  358628 fix.go:206] guest clock: 1704749785.816040122
	I0108 21:36:25.868629  358628 fix.go:219] Guest: 2024-01-08 21:36:25.816040122 +0000 UTC Remote: 2024-01-08 21:36:25.750908514 +0000 UTC m=+319.474543100 (delta=65.131608ms)
	I0108 21:36:25.868650  358628 fix.go:190] guest clock delta is within tolerance: 65.131608ms
	I0108 21:36:25.868654  358628 start.go:83] releasing machines lock for "multinode-962345", held for 21.472800154s
	I0108 21:36:25.868682  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:36:25.868957  358628 main.go:141] libmachine: (multinode-962345) Calling .GetIP
	I0108 21:36:25.871770  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.872135  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:25.872169  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.872340  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:36:25.872815  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:36:25.872999  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:36:25.873086  358628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:36:25.873125  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:36:25.873260  358628 ssh_runner.go:195] Run: cat /version.json
	I0108 21:36:25.873312  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:36:25.875948  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.876321  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.876361  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:25.876403  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.876469  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:36:25.876639  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:25.876750  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:25.876777  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:36:25.876822  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:25.876942  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:36:25.876937  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:36:25.877100  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:36:25.877248  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:36:25.877363  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:36:25.956048  358628 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0108 21:36:25.956328  358628 ssh_runner.go:195] Run: systemctl --version
	I0108 21:36:25.985580  358628 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:36:25.986187  358628 command_runner.go:130] > systemd 247 (247)
	I0108 21:36:25.986213  358628 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0108 21:36:25.986283  358628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:36:26.128984  358628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:36:26.135676  358628 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 21:36:26.135730  358628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:36:26.135793  358628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:36:26.150474  358628 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:36:26.150558  358628 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:36:26.150570  358628 start.go:475] detecting cgroup driver to use...
	I0108 21:36:26.150675  358628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:36:26.164300  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:36:26.177345  358628 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:36:26.177438  358628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:36:26.190031  358628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:36:26.202766  358628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:36:26.313372  358628 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0108 21:36:26.313468  358628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:36:26.327178  358628 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 21:36:26.438271  358628 docker.go:219] disabling docker service ...
	I0108 21:36:26.438355  358628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:36:26.451453  358628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:36:26.461957  358628 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0108 21:36:26.462532  358628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:36:26.575146  358628 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 21:36:26.575255  358628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:36:26.588239  358628 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0108 21:36:26.588686  358628 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 21:36:26.684448  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:36:26.697590  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:36:26.714163  358628 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 21:36:26.714212  358628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:36:26.714281  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:36:26.723115  358628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:36:26.723192  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:36:26.732245  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:36:26.741074  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:36:26.750084  358628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:36:26.759568  358628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:36:26.767478  358628 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:36:26.767522  358628 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:36:26.767566  358628 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 21:36:26.779144  358628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:36:26.788281  358628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:36:26.908028  358628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:36:27.075621  358628 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:36:27.075693  358628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:36:27.080423  358628 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 21:36:27.080446  358628 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:36:27.080456  358628 command_runner.go:130] > Device: 16h/22d	Inode: 794         Links: 1
	I0108 21:36:27.080463  358628 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:36:27.080468  358628 command_runner.go:130] > Access: 2024-01-08 21:36:27.010212418 +0000
	I0108 21:36:27.080476  358628 command_runner.go:130] > Modify: 2024-01-08 21:36:27.010212418 +0000
	I0108 21:36:27.080481  358628 command_runner.go:130] > Change: 2024-01-08 21:36:27.010212418 +0000
	I0108 21:36:27.080484  358628 command_runner.go:130] >  Birth: -
	I0108 21:36:27.080674  358628 start.go:543] Will wait 60s for crictl version
	I0108 21:36:27.080750  358628 ssh_runner.go:195] Run: which crictl
	I0108 21:36:27.084272  358628 command_runner.go:130] > /usr/bin/crictl
	I0108 21:36:27.084539  358628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:36:27.122106  358628 command_runner.go:130] > Version:  0.1.0
	I0108 21:36:27.122136  358628 command_runner.go:130] > RuntimeName:  cri-o
	I0108 21:36:27.122144  358628 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 21:36:27.122154  358628 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:36:27.123895  358628 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:36:27.124002  358628 ssh_runner.go:195] Run: crio --version
	I0108 21:36:27.171714  358628 command_runner.go:130] > crio version 1.24.1
	I0108 21:36:27.171737  358628 command_runner.go:130] > Version:          1.24.1
	I0108 21:36:27.171743  358628 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:36:27.171750  358628 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:36:27.171758  358628 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:36:27.171765  358628 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:36:27.171770  358628 command_runner.go:130] > Compiler:         gc
	I0108 21:36:27.171777  358628 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:36:27.171785  358628 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:36:27.171799  358628 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:36:27.171810  358628 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:36:27.171820  358628 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:36:27.171912  358628 ssh_runner.go:195] Run: crio --version
	I0108 21:36:27.214386  358628 command_runner.go:130] > crio version 1.24.1
	I0108 21:36:27.214420  358628 command_runner.go:130] > Version:          1.24.1
	I0108 21:36:27.214427  358628 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:36:27.214434  358628 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:36:27.214441  358628 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:36:27.214445  358628 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:36:27.214449  358628 command_runner.go:130] > Compiler:         gc
	I0108 21:36:27.214453  358628 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:36:27.214467  358628 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:36:27.214478  358628 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:36:27.214489  358628 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:36:27.214496  358628 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:36:27.216485  358628 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:36:27.217839  358628 main.go:141] libmachine: (multinode-962345) Calling .GetIP
	I0108 21:36:27.220687  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:27.221063  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:36:27.221094  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:36:27.221354  358628 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:36:27.225282  358628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
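
The grep plus rewrite above is minikube's idempotent way of pinning host.minikube.internal to the gateway IP 192.168.39.1 inside the guest: any stale line for that hostname is dropped and the current mapping is appended before the file is copied back with sudo. A minimal sketch of the same idea in plain Go (illustrative function name; minikube actually runs the bash one-liner shown above over SSH):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostEntry removes any existing mapping for host and appends
	// "ip<TAB>host", mirroring the grep/echo/cp pipeline in the log.
	func ensureHostEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
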
	I0108 21:36:27.237886  358628 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:36:27.237962  358628 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:36:27.277505  358628 command_runner.go:130] > {
	I0108 21:36:27.277526  358628 command_runner.go:130] >   "images": [
	I0108 21:36:27.277530  358628 command_runner.go:130] >     {
	I0108 21:36:27.277538  358628 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 21:36:27.277547  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:27.277557  358628 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 21:36:27.277561  358628 command_runner.go:130] >       ],
	I0108 21:36:27.277565  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:27.277573  358628 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 21:36:27.277583  358628 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 21:36:27.277587  358628 command_runner.go:130] >       ],
	I0108 21:36:27.277591  358628 command_runner.go:130] >       "size": "750414",
	I0108 21:36:27.277596  358628 command_runner.go:130] >       "uid": {
	I0108 21:36:27.277600  358628 command_runner.go:130] >         "value": "65535"
	I0108 21:36:27.277605  358628 command_runner.go:130] >       },
	I0108 21:36:27.277624  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:27.277638  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:27.277642  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:27.277645  358628 command_runner.go:130] >     }
	I0108 21:36:27.277649  358628 command_runner.go:130] >   ]
	I0108 21:36:27.277655  358628 command_runner.go:130] > }
	I0108 21:36:27.277774  358628 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 21:36:27.277834  358628 ssh_runner.go:195] Run: which lz4
	I0108 21:36:27.281540  358628 command_runner.go:130] > /usr/bin/lz4
	I0108 21:36:27.281714  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 21:36:27.281819  358628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 21:36:27.286113  358628 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:36:27.286171  358628 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:36:27.286188  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 21:36:29.137351  358628 crio.go:444] Took 1.855565 seconds to copy over tarball
	I0108 21:36:29.137443  358628 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:36:32.009715  358628 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.872230086s)
	I0108 21:36:32.009760  358628 crio.go:451] Took 2.872377 seconds to extract the tarball
	I0108 21:36:32.009773  358628 ssh_runner.go:146] rm: /preloaded.tar.lz4
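
The sequence from the first crictl listing down to this point is the preload path: list what the runtime already has, and only because the kube-apiserver:v1.28.4 tag was missing, copy the preloaded tarball over and unpack it into /var with lz4. A rough sketch of that decision, with illustrative helper names rather than minikube's real API:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList matches the shape of `crictl images --output json` shown above.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
		if err != nil {
			fmt.Println("crictl check failed:", err)
			return
		}
		if !ok {
			// The tarball is assumed to already sit at /preloaded.tar.lz4,
			// as in the scp step above.
			if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run(); err != nil {
				fmt.Println("extract failed:", err)
			}
		}
	}

Unpacking into /var is what makes the second crictl listing below report every v1.28.4 image as already present, so image loading is skipped.
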
	I0108 21:36:32.050900  358628 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:36:32.106159  358628 command_runner.go:130] > {
	I0108 21:36:32.106185  358628 command_runner.go:130] >   "images": [
	I0108 21:36:32.106195  358628 command_runner.go:130] >     {
	I0108 21:36:32.106207  358628 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 21:36:32.106213  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:32.106220  358628 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 21:36:32.106225  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106232  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:32.106249  358628 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 21:36:32.106264  358628 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 21:36:32.106276  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106284  358628 command_runner.go:130] >       "size": "65258016",
	I0108 21:36:32.106292  358628 command_runner.go:130] >       "uid": null,
	I0108 21:36:32.106300  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:32.106310  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:32.106324  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:32.106330  358628 command_runner.go:130] >     },
	I0108 21:36:32.106337  358628 command_runner.go:130] >     {
	I0108 21:36:32.106350  358628 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 21:36:32.106365  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:32.106393  358628 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 21:36:32.106403  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106409  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:32.106423  358628 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 21:36:32.106442  358628 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 21:36:32.106452  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106470  358628 command_runner.go:130] >       "size": "31470524",
	I0108 21:36:32.106480  358628 command_runner.go:130] >       "uid": null,
	I0108 21:36:32.106495  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:32.106507  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:32.106517  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:32.106524  358628 command_runner.go:130] >     },
	I0108 21:36:32.106534  358628 command_runner.go:130] >     {
	I0108 21:36:32.106546  358628 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 21:36:32.106557  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:32.106569  358628 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 21:36:32.106578  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106586  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:32.106600  358628 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 21:36:32.106616  358628 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 21:36:32.106626  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106634  358628 command_runner.go:130] >       "size": "53621675",
	I0108 21:36:32.106645  358628 command_runner.go:130] >       "uid": null,
	I0108 21:36:32.106654  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:32.106664  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:32.106676  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:32.106689  358628 command_runner.go:130] >     },
	I0108 21:36:32.106698  358628 command_runner.go:130] >     {
	I0108 21:36:32.106709  358628 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 21:36:32.106719  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:32.106732  358628 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 21:36:32.106742  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106750  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:32.106765  358628 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 21:36:32.106781  358628 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 21:36:32.106799  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106810  358628 command_runner.go:130] >       "size": "295456551",
	I0108 21:36:32.106820  358628 command_runner.go:130] >       "uid": {
	I0108 21:36:32.106829  358628 command_runner.go:130] >         "value": "0"
	I0108 21:36:32.106838  358628 command_runner.go:130] >       },
	I0108 21:36:32.106846  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:32.106857  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:32.106865  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:32.106874  358628 command_runner.go:130] >     },
	I0108 21:36:32.106884  358628 command_runner.go:130] >     {
	I0108 21:36:32.106896  358628 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 21:36:32.106906  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:32.106917  358628 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 21:36:32.106927  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106935  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:32.106950  358628 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 21:36:32.106966  358628 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 21:36:32.106975  358628 command_runner.go:130] >       ],
	I0108 21:36:32.106984  358628 command_runner.go:130] >       "size": "127226832",
	I0108 21:36:32.106993  358628 command_runner.go:130] >       "uid": {
	I0108 21:36:32.107003  358628 command_runner.go:130] >         "value": "0"
	I0108 21:36:32.107011  358628 command_runner.go:130] >       },
	I0108 21:36:32.107024  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:32.107034  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:32.107043  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:32.107052  358628 command_runner.go:130] >     },
	I0108 21:36:32.107059  358628 command_runner.go:130] >     {
	I0108 21:36:32.107076  358628 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 21:36:32.107086  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:32.107096  358628 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 21:36:32.107105  358628 command_runner.go:130] >       ],
	I0108 21:36:32.107113  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:32.107129  358628 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 21:36:32.107145  358628 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 21:36:32.107155  358628 command_runner.go:130] >       ],
	I0108 21:36:32.107165  358628 command_runner.go:130] >       "size": "123261750",
	I0108 21:36:32.107175  358628 command_runner.go:130] >       "uid": {
	I0108 21:36:32.107186  358628 command_runner.go:130] >         "value": "0"
	I0108 21:36:32.107194  358628 command_runner.go:130] >       },
	I0108 21:36:32.107202  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:32.107213  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:32.107223  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:32.107230  358628 command_runner.go:130] >     },
	I0108 21:36:32.107239  358628 command_runner.go:130] >     {
	I0108 21:36:32.107250  358628 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 21:36:32.107269  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:32.107281  358628 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 21:36:32.107291  358628 command_runner.go:130] >       ],
	I0108 21:36:32.107301  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:32.107321  358628 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 21:36:32.107337  358628 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 21:36:32.107346  358628 command_runner.go:130] >       ],
	I0108 21:36:32.107355  358628 command_runner.go:130] >       "size": "74749335",
	I0108 21:36:32.107380  358628 command_runner.go:130] >       "uid": null,
	I0108 21:36:32.107391  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:32.107402  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:32.107411  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:32.107420  358628 command_runner.go:130] >     },
	I0108 21:36:32.107427  358628 command_runner.go:130] >     {
	I0108 21:36:32.107441  358628 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 21:36:32.107449  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:32.107461  358628 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 21:36:32.107471  358628 command_runner.go:130] >       ],
	I0108 21:36:32.107484  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:32.107516  358628 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 21:36:32.107532  358628 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 21:36:32.107541  358628 command_runner.go:130] >       ],
	I0108 21:36:32.107550  358628 command_runner.go:130] >       "size": "61551410",
	I0108 21:36:32.107560  358628 command_runner.go:130] >       "uid": {
	I0108 21:36:32.107571  358628 command_runner.go:130] >         "value": "0"
	I0108 21:36:32.107578  358628 command_runner.go:130] >       },
	I0108 21:36:32.107589  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:32.107597  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:32.107607  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:32.107613  358628 command_runner.go:130] >     },
	I0108 21:36:32.107620  358628 command_runner.go:130] >     {
	I0108 21:36:32.107634  358628 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 21:36:32.107644  358628 command_runner.go:130] >       "repoTags": [
	I0108 21:36:32.107656  358628 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 21:36:32.107665  358628 command_runner.go:130] >       ],
	I0108 21:36:32.107673  358628 command_runner.go:130] >       "repoDigests": [
	I0108 21:36:32.107692  358628 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 21:36:32.107708  358628 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 21:36:32.107717  358628 command_runner.go:130] >       ],
	I0108 21:36:32.107727  358628 command_runner.go:130] >       "size": "750414",
	I0108 21:36:32.107737  358628 command_runner.go:130] >       "uid": {
	I0108 21:36:32.107748  358628 command_runner.go:130] >         "value": "65535"
	I0108 21:36:32.107755  358628 command_runner.go:130] >       },
	I0108 21:36:32.107765  358628 command_runner.go:130] >       "username": "",
	I0108 21:36:32.107775  358628 command_runner.go:130] >       "spec": null,
	I0108 21:36:32.107785  358628 command_runner.go:130] >       "pinned": false
	I0108 21:36:32.107794  358628 command_runner.go:130] >     }
	I0108 21:36:32.107800  358628 command_runner.go:130] >   ]
	I0108 21:36:32.107807  358628 command_runner.go:130] > }
	I0108 21:36:32.107927  358628 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:36:32.107941  358628 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:36:32.108014  358628 ssh_runner.go:195] Run: crio config
	I0108 21:36:32.158053  358628 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 21:36:32.158099  358628 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 21:36:32.158109  358628 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 21:36:32.158115  358628 command_runner.go:130] > #
	I0108 21:36:32.158124  358628 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 21:36:32.158131  358628 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 21:36:32.158141  358628 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 21:36:32.158150  358628 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 21:36:32.158155  358628 command_runner.go:130] > # reload'.
	I0108 21:36:32.158165  358628 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 21:36:32.158177  358628 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 21:36:32.158188  358628 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 21:36:32.158204  358628 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 21:36:32.158210  358628 command_runner.go:130] > [crio]
	I0108 21:36:32.158221  358628 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 21:36:32.158234  358628 command_runner.go:130] > # containers images, in this directory.
	I0108 21:36:32.158242  358628 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 21:36:32.158268  358628 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 21:36:32.158280  358628 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 21:36:32.158297  358628 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 21:36:32.158311  358628 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 21:36:32.158322  358628 command_runner.go:130] > storage_driver = "overlay"
	I0108 21:36:32.158333  358628 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 21:36:32.158347  358628 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 21:36:32.158358  358628 command_runner.go:130] > storage_option = [
	I0108 21:36:32.158368  358628 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 21:36:32.158382  358628 command_runner.go:130] > ]
	I0108 21:36:32.158397  358628 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 21:36:32.158411  358628 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 21:36:32.158422  358628 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 21:36:32.158435  358628 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 21:36:32.158449  358628 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 21:36:32.158461  358628 command_runner.go:130] > # always happen on a node reboot
	I0108 21:36:32.158473  358628 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 21:36:32.158486  358628 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 21:36:32.158500  358628 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 21:36:32.158521  358628 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 21:36:32.158537  358628 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 21:36:32.158551  358628 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 21:36:32.158568  358628 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 21:36:32.158579  358628 command_runner.go:130] > # internal_wipe = true
	I0108 21:36:32.158589  358628 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 21:36:32.158603  358628 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 21:36:32.158616  358628 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 21:36:32.158628  358628 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 21:36:32.158644  358628 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 21:36:32.158654  358628 command_runner.go:130] > [crio.api]
	I0108 21:36:32.158664  358628 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 21:36:32.158675  358628 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 21:36:32.158688  358628 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 21:36:32.158699  358628 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 21:36:32.158714  358628 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 21:36:32.158726  358628 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 21:36:32.158737  358628 command_runner.go:130] > # stream_port = "0"
	I0108 21:36:32.158745  358628 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 21:36:32.158760  358628 command_runner.go:130] > # stream_enable_tls = false
	I0108 21:36:32.158770  358628 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 21:36:32.158778  358628 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 21:36:32.158784  358628 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 21:36:32.158792  358628 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 21:36:32.158796  358628 command_runner.go:130] > # minutes.
	I0108 21:36:32.158801  358628 command_runner.go:130] > # stream_tls_cert = ""
	I0108 21:36:32.158809  358628 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 21:36:32.158816  358628 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 21:36:32.158826  358628 command_runner.go:130] > # stream_tls_key = ""
	I0108 21:36:32.158836  358628 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 21:36:32.158844  358628 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 21:36:32.158850  358628 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 21:36:32.158856  358628 command_runner.go:130] > # stream_tls_ca = ""
	I0108 21:36:32.158863  358628 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:36:32.158867  358628 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 21:36:32.158874  358628 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:36:32.158884  358628 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 21:36:32.158916  358628 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 21:36:32.158930  358628 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 21:36:32.158938  358628 command_runner.go:130] > [crio.runtime]
	I0108 21:36:32.158951  358628 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 21:36:32.158959  358628 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 21:36:32.158966  358628 command_runner.go:130] > # "nofile=1024:2048"
	I0108 21:36:32.158979  358628 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 21:36:32.158989  358628 command_runner.go:130] > # default_ulimits = [
	I0108 21:36:32.158995  358628 command_runner.go:130] > # ]
	I0108 21:36:32.159008  358628 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 21:36:32.159018  358628 command_runner.go:130] > # no_pivot = false
	I0108 21:36:32.159028  358628 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 21:36:32.159042  358628 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 21:36:32.159053  358628 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 21:36:32.159063  358628 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 21:36:32.159074  358628 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 21:36:32.159087  358628 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:36:32.159098  358628 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 21:36:32.159111  358628 command_runner.go:130] > # Cgroup setting for conmon
	I0108 21:36:32.159126  358628 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 21:36:32.159136  358628 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 21:36:32.159145  358628 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 21:36:32.159157  358628 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 21:36:32.159171  358628 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:36:32.159181  358628 command_runner.go:130] > conmon_env = [
	I0108 21:36:32.159193  358628 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 21:36:32.159201  358628 command_runner.go:130] > ]
	I0108 21:36:32.159207  358628 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 21:36:32.159215  358628 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 21:36:32.159222  358628 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 21:36:32.159232  358628 command_runner.go:130] > # default_env = [
	I0108 21:36:32.159237  358628 command_runner.go:130] > # ]
	I0108 21:36:32.159251  358628 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 21:36:32.159261  358628 command_runner.go:130] > # selinux = false
	I0108 21:36:32.159274  358628 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 21:36:32.159285  358628 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 21:36:32.159300  358628 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 21:36:32.159312  358628 command_runner.go:130] > # seccomp_profile = ""
	I0108 21:36:32.159325  358628 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 21:36:32.159337  358628 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 21:36:32.159349  358628 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 21:36:32.159396  358628 command_runner.go:130] > # which might increase security.
	I0108 21:36:32.159406  358628 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 21:36:32.159419  358628 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 21:36:32.159433  358628 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 21:36:32.159444  358628 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 21:36:32.159457  358628 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 21:36:32.159469  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:36:32.159478  358628 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 21:36:32.159491  358628 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 21:36:32.159501  358628 command_runner.go:130] > # the cgroup blockio controller.
	I0108 21:36:32.159505  358628 command_runner.go:130] > # blockio_config_file = ""
	I0108 21:36:32.159515  358628 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 21:36:32.159525  358628 command_runner.go:130] > # irqbalance daemon.
	I0108 21:36:32.159541  358628 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 21:36:32.159555  358628 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 21:36:32.159567  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:36:32.159577  358628 command_runner.go:130] > # rdt_config_file = ""
	I0108 21:36:32.159586  358628 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 21:36:32.159594  358628 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 21:36:32.159602  358628 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 21:36:32.159612  358628 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 21:36:32.159623  358628 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 21:36:32.159636  358628 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 21:36:32.159646  358628 command_runner.go:130] > # will be added.
	I0108 21:36:32.159654  358628 command_runner.go:130] > # default_capabilities = [
	I0108 21:36:32.159663  358628 command_runner.go:130] > # 	"CHOWN",
	I0108 21:36:32.159670  358628 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 21:36:32.159679  358628 command_runner.go:130] > # 	"FSETID",
	I0108 21:36:32.159684  358628 command_runner.go:130] > # 	"FOWNER",
	I0108 21:36:32.159694  358628 command_runner.go:130] > # 	"SETGID",
	I0108 21:36:32.159700  358628 command_runner.go:130] > # 	"SETUID",
	I0108 21:36:32.159713  358628 command_runner.go:130] > # 	"SETPCAP",
	I0108 21:36:32.159723  358628 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 21:36:32.159731  358628 command_runner.go:130] > # 	"KILL",
	I0108 21:36:32.159740  358628 command_runner.go:130] > # ]
	I0108 21:36:32.159754  358628 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 21:36:32.159764  358628 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:36:32.159768  358628 command_runner.go:130] > # default_sysctls = [
	I0108 21:36:32.159773  358628 command_runner.go:130] > # ]
	I0108 21:36:32.159783  358628 command_runner.go:130] > # List of devices on the host that a
	I0108 21:36:32.159796  358628 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 21:36:32.159804  358628 command_runner.go:130] > # allowed_devices = [
	I0108 21:36:32.159814  358628 command_runner.go:130] > # 	"/dev/fuse",
	I0108 21:36:32.159822  358628 command_runner.go:130] > # ]
	I0108 21:36:32.159833  358628 command_runner.go:130] > # List of additional devices. specified as
	I0108 21:36:32.159848  358628 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 21:36:32.159859  358628 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 21:36:32.159899  358628 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:36:32.159909  358628 command_runner.go:130] > # additional_devices = [
	I0108 21:36:32.159918  358628 command_runner.go:130] > # ]
	I0108 21:36:32.159930  358628 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 21:36:32.159940  358628 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 21:36:32.159947  358628 command_runner.go:130] > # 	"/etc/cdi",
	I0108 21:36:32.159957  358628 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 21:36:32.159964  358628 command_runner.go:130] > # ]
	I0108 21:36:32.159975  358628 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 21:36:32.159988  358628 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 21:36:32.159997  358628 command_runner.go:130] > # Defaults to false.
	I0108 21:36:32.160008  358628 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 21:36:32.160022  358628 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 21:36:32.160036  358628 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 21:36:32.160045  358628 command_runner.go:130] > # hooks_dir = [
	I0108 21:36:32.160054  358628 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 21:36:32.160063  358628 command_runner.go:130] > # ]
	I0108 21:36:32.160073  358628 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 21:36:32.160086  358628 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 21:36:32.160097  358628 command_runner.go:130] > # its default mounts from the following two files:
	I0108 21:36:32.160109  358628 command_runner.go:130] > #
	I0108 21:36:32.160119  358628 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 21:36:32.160136  358628 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 21:36:32.160149  358628 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 21:36:32.160155  358628 command_runner.go:130] > #
	I0108 21:36:32.160168  358628 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 21:36:32.160182  358628 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 21:36:32.160195  358628 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 21:36:32.160207  358628 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 21:36:32.160215  358628 command_runner.go:130] > #
	I0108 21:36:32.160219  358628 command_runner.go:130] > # default_mounts_file = ""
	I0108 21:36:32.160229  358628 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 21:36:32.160249  358628 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 21:36:32.160259  358628 command_runner.go:130] > pids_limit = 1024
	I0108 21:36:32.160269  358628 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 21:36:32.160283  358628 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 21:36:32.160297  358628 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 21:36:32.160313  358628 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 21:36:32.160327  358628 command_runner.go:130] > # log_size_max = -1
	I0108 21:36:32.160339  358628 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 21:36:32.160350  358628 command_runner.go:130] > # log_to_journald = false
	I0108 21:36:32.160361  358628 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 21:36:32.160373  358628 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 21:36:32.160389  358628 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 21:36:32.160397  358628 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 21:36:32.160403  358628 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 21:36:32.160413  358628 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 21:36:32.160425  358628 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 21:36:32.160433  358628 command_runner.go:130] > # read_only = false
	I0108 21:36:32.160446  358628 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 21:36:32.160459  358628 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 21:36:32.160469  358628 command_runner.go:130] > # live configuration reload.
	I0108 21:36:32.160478  358628 command_runner.go:130] > # log_level = "info"
	I0108 21:36:32.160490  358628 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 21:36:32.160497  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:36:32.160503  358628 command_runner.go:130] > # log_filter = ""
	I0108 21:36:32.160519  358628 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 21:36:32.160533  358628 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 21:36:32.160543  358628 command_runner.go:130] > # separated by comma.
	I0108 21:36:32.160556  358628 command_runner.go:130] > # uid_mappings = ""
	I0108 21:36:32.160576  358628 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 21:36:32.160584  358628 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 21:36:32.160621  358628 command_runner.go:130] > # separated by comma.
	I0108 21:36:32.160637  358628 command_runner.go:130] > # gid_mappings = ""
	I0108 21:36:32.160648  358628 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 21:36:32.160662  358628 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:36:32.160675  358628 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:36:32.160690  358628 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 21:36:32.160702  358628 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 21:36:32.160711  358628 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:36:32.160722  358628 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:36:32.160733  358628 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 21:36:32.160744  358628 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 21:36:32.160757  358628 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 21:36:32.160774  358628 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 21:36:32.160784  358628 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 21:36:32.160797  358628 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 21:36:32.160806  358628 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 21:36:32.160814  358628 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 21:36:32.160826  358628 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 21:36:32.160836  358628 command_runner.go:130] > drop_infra_ctr = false
	I0108 21:36:32.160847  358628 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 21:36:32.160859  358628 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 21:36:32.160874  358628 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 21:36:32.160883  358628 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 21:36:32.160889  358628 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 21:36:32.160899  358628 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 21:36:32.160908  358628 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 21:36:32.160923  358628 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 21:36:32.160933  358628 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 21:36:32.160944  358628 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 21:36:32.160958  358628 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 21:36:32.160974  358628 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 21:36:32.160984  358628 command_runner.go:130] > # default_runtime = "runc"
	I0108 21:36:32.160990  358628 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 21:36:32.161006  358628 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 21:36:32.161025  358628 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 21:36:32.161037  358628 command_runner.go:130] > # creation as a file is not desired either.
	I0108 21:36:32.161052  358628 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 21:36:32.161064  358628 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 21:36:32.161074  358628 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 21:36:32.161083  358628 command_runner.go:130] > # ]
	I0108 21:36:32.161089  358628 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 21:36:32.161101  358628 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 21:36:32.161115  358628 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 21:36:32.161129  358628 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 21:36:32.161137  358628 command_runner.go:130] > #
	I0108 21:36:32.161145  358628 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 21:36:32.161157  358628 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 21:36:32.161168  358628 command_runner.go:130] > #  runtime_type = "oci"
	I0108 21:36:32.161187  358628 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 21:36:32.161201  358628 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 21:36:32.161212  358628 command_runner.go:130] > #  allowed_annotations = []
	I0108 21:36:32.161218  358628 command_runner.go:130] > # Where:
	I0108 21:36:32.161231  358628 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 21:36:32.161244  358628 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 21:36:32.161257  358628 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 21:36:32.161270  358628 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 21:36:32.161279  358628 command_runner.go:130] > #   in $PATH.
	I0108 21:36:32.161286  358628 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 21:36:32.161296  358628 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 21:36:32.161310  358628 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 21:36:32.161320  358628 command_runner.go:130] > #   state.
	I0108 21:36:32.161331  358628 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 21:36:32.161344  358628 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 21:36:32.161356  358628 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 21:36:32.161369  358628 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 21:36:32.161380  358628 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 21:36:32.161399  358628 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 21:36:32.161412  358628 command_runner.go:130] > #   The currently recognized values are:
	I0108 21:36:32.161423  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 21:36:32.161440  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 21:36:32.161452  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 21:36:32.161464  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 21:36:32.161479  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 21:36:32.161498  358628 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 21:36:32.161512  358628 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 21:36:32.161526  358628 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 21:36:32.161538  358628 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 21:36:32.161549  358628 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 21:36:32.161558  358628 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 21:36:32.161568  358628 command_runner.go:130] > runtime_type = "oci"
	I0108 21:36:32.161580  358628 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 21:36:32.161590  358628 command_runner.go:130] > runtime_config_path = ""
	I0108 21:36:32.161600  358628 command_runner.go:130] > monitor_path = ""
	I0108 21:36:32.161606  358628 command_runner.go:130] > monitor_cgroup = ""
	I0108 21:36:32.161620  358628 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 21:36:32.161633  358628 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 21:36:32.161639  358628 command_runner.go:130] > # running containers
	I0108 21:36:32.161645  358628 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 21:36:32.161654  358628 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 21:36:32.161716  358628 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 21:36:32.161737  358628 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 21:36:32.161746  358628 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 21:36:32.161754  358628 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 21:36:32.161765  358628 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 21:36:32.161773  358628 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 21:36:32.161779  358628 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 21:36:32.161784  358628 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 21:36:32.161793  358628 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 21:36:32.161801  358628 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 21:36:32.161809  358628 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 21:36:32.161816  358628 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 21:36:32.161826  358628 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 21:36:32.161834  358628 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 21:36:32.161845  358628 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 21:36:32.161857  358628 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 21:36:32.161868  358628 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 21:36:32.161877  358628 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 21:36:32.161881  358628 command_runner.go:130] > # Example:
	I0108 21:36:32.161886  358628 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 21:36:32.161893  358628 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 21:36:32.161898  358628 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 21:36:32.161905  358628 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 21:36:32.161909  358628 command_runner.go:130] > # cpuset = 0
	I0108 21:36:32.161915  358628 command_runner.go:130] > # cpushares = "0-1"
	I0108 21:36:32.161919  358628 command_runner.go:130] > # Where:
	I0108 21:36:32.161926  358628 command_runner.go:130] > # The workload name is workload-type.
	I0108 21:36:32.161933  358628 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 21:36:32.161941  358628 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 21:36:32.161946  358628 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 21:36:32.161954  358628 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 21:36:32.161964  358628 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 21:36:32.161971  358628 command_runner.go:130] > # 
	I0108 21:36:32.161980  358628 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 21:36:32.161986  358628 command_runner.go:130] > #
	I0108 21:36:32.161992  358628 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 21:36:32.162000  358628 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 21:36:32.162006  358628 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 21:36:32.162014  358628 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 21:36:32.162023  358628 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 21:36:32.162027  358628 command_runner.go:130] > [crio.image]
	I0108 21:36:32.162033  358628 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 21:36:32.162037  358628 command_runner.go:130] > # default_transport = "docker://"
	I0108 21:36:32.162045  358628 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 21:36:32.162054  358628 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:36:32.162060  358628 command_runner.go:130] > # global_auth_file = ""
	I0108 21:36:32.162065  358628 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 21:36:32.162072  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:36:32.162077  358628 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 21:36:32.162088  358628 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 21:36:32.162096  358628 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:36:32.162101  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:36:32.162109  358628 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 21:36:32.162115  358628 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 21:36:32.162121  358628 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 21:36:32.162129  358628 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 21:36:32.162135  358628 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 21:36:32.162139  358628 command_runner.go:130] > # pause_command = "/pause"
	I0108 21:36:32.162150  358628 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 21:36:32.162161  358628 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 21:36:32.162169  358628 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 21:36:32.162175  358628 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 21:36:32.162182  358628 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 21:36:32.162187  358628 command_runner.go:130] > # signature_policy = ""
	I0108 21:36:32.162192  358628 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 21:36:32.162198  358628 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 21:36:32.162202  358628 command_runner.go:130] > # changing them here.
	I0108 21:36:32.162208  358628 command_runner.go:130] > # insecure_registries = [
	I0108 21:36:32.162211  358628 command_runner.go:130] > # ]
	I0108 21:36:32.162217  358628 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 21:36:32.162222  358628 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 21:36:32.162226  358628 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 21:36:32.162231  358628 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 21:36:32.162235  358628 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 21:36:32.162240  358628 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 21:36:32.162244  358628 command_runner.go:130] > # CNI plugins.
	I0108 21:36:32.162247  358628 command_runner.go:130] > [crio.network]
	I0108 21:36:32.162253  358628 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 21:36:32.162258  358628 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 21:36:32.162262  358628 command_runner.go:130] > # cni_default_network = ""
	I0108 21:36:32.162267  358628 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 21:36:32.162272  358628 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 21:36:32.162277  358628 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 21:36:32.162280  358628 command_runner.go:130] > # plugin_dirs = [
	I0108 21:36:32.162284  358628 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 21:36:32.162289  358628 command_runner.go:130] > # ]
	I0108 21:36:32.162297  358628 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 21:36:32.162301  358628 command_runner.go:130] > [crio.metrics]
	I0108 21:36:32.162306  358628 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 21:36:32.162312  358628 command_runner.go:130] > enable_metrics = true
	I0108 21:36:32.162317  358628 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 21:36:32.162321  358628 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 21:36:32.162327  358628 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0108 21:36:32.162338  358628 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 21:36:32.162343  358628 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 21:36:32.162348  358628 command_runner.go:130] > # metrics_collectors = [
	I0108 21:36:32.162352  358628 command_runner.go:130] > # 	"operations",
	I0108 21:36:32.162359  358628 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 21:36:32.162363  358628 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 21:36:32.162367  358628 command_runner.go:130] > # 	"operations_errors",
	I0108 21:36:32.162372  358628 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 21:36:32.162382  358628 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 21:36:32.162386  358628 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 21:36:32.162395  358628 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 21:36:32.162402  358628 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 21:36:32.162406  358628 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 21:36:32.162412  358628 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 21:36:32.162416  358628 command_runner.go:130] > # 	"containers_oom_total",
	I0108 21:36:32.162420  358628 command_runner.go:130] > # 	"containers_oom",
	I0108 21:36:32.162426  358628 command_runner.go:130] > # 	"processes_defunct",
	I0108 21:36:32.162430  358628 command_runner.go:130] > # 	"operations_total",
	I0108 21:36:32.162434  358628 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 21:36:32.162441  358628 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 21:36:32.162445  358628 command_runner.go:130] > # 	"operations_errors_total",
	I0108 21:36:32.162452  358628 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 21:36:32.162457  358628 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 21:36:32.162462  358628 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 21:36:32.162467  358628 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 21:36:32.162473  358628 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 21:36:32.162477  358628 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 21:36:32.162483  358628 command_runner.go:130] > # ]
	I0108 21:36:32.162490  358628 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 21:36:32.162496  358628 command_runner.go:130] > # metrics_port = 9090
	I0108 21:36:32.162501  358628 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 21:36:32.162506  358628 command_runner.go:130] > # metrics_socket = ""
	I0108 21:36:32.162511  358628 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 21:36:32.162519  358628 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 21:36:32.162525  358628 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 21:36:32.162532  358628 command_runner.go:130] > # certificate on any modification event.
	I0108 21:36:32.162536  358628 command_runner.go:130] > # metrics_cert = ""
	I0108 21:36:32.162543  358628 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 21:36:32.162548  358628 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 21:36:32.162554  358628 command_runner.go:130] > # metrics_key = ""
	I0108 21:36:32.162560  358628 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 21:36:32.162566  358628 command_runner.go:130] > [crio.tracing]
	I0108 21:36:32.162571  358628 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 21:36:32.162576  358628 command_runner.go:130] > # enable_tracing = false
	I0108 21:36:32.162586  358628 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 21:36:32.162593  358628 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 21:36:32.162601  358628 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 21:36:32.162608  358628 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 21:36:32.162614  358628 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 21:36:32.162620  358628 command_runner.go:130] > [crio.stats]
	I0108 21:36:32.162625  358628 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 21:36:32.162630  358628 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 21:36:32.162637  358628 command_runner.go:130] > # stats_collection_period = 0
	I0108 21:36:32.162664  358628 command_runner.go:130] ! time="2024-01-08 21:36:32.103489350Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 21:36:32.162676  358628 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 21:36:32.162751  358628 cni.go:84] Creating CNI manager for ""
	I0108 21:36:32.162761  358628 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:36:32.162782  358628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:36:32.162805  358628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-962345 NodeName:multinode-962345 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:36:32.162936  358628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-962345"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:36:32.163007  358628 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-962345 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:36:32.163057  358628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:36:32.172192  358628 command_runner.go:130] > kubeadm
	I0108 21:36:32.172212  358628 command_runner.go:130] > kubectl
	I0108 21:36:32.172225  358628 command_runner.go:130] > kubelet
	I0108 21:36:32.172314  358628 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:36:32.172392  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:36:32.180996  358628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0108 21:36:32.197951  358628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:36:32.213746  358628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0108 21:36:32.230851  358628 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0108 21:36:32.234793  358628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:36:32.247151  358628 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345 for IP: 192.168.39.239
	I0108 21:36:32.247193  358628 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:32.247384  358628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 21:36:32.247434  358628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 21:36:32.247580  358628 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key
	I0108 21:36:32.247666  358628 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key.4bd9216f
	I0108 21:36:32.247726  358628 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.key
	I0108 21:36:32.247745  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 21:36:32.247772  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 21:36:32.247790  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 21:36:32.247807  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 21:36:32.247829  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:36:32.247843  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:36:32.247855  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:36:32.247869  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:36:32.247944  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 21:36:32.247971  358628 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 21:36:32.247981  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:36:32.248007  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:36:32.248031  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:36:32.248059  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 21:36:32.248112  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:36:32.248144  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem -> /usr/share/ca-certificates/341982.pem
	I0108 21:36:32.248157  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /usr/share/ca-certificates/3419822.pem
	I0108 21:36:32.248169  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:36:32.248907  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:36:32.273075  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:36:32.298644  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:36:32.322582  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:36:32.346924  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:36:32.370329  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:36:32.396423  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:36:32.420457  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:36:32.444131  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 21:36:32.466452  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 21:36:32.490237  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:36:32.513144  358628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:36:32.529593  358628 ssh_runner.go:195] Run: openssl version
	I0108 21:36:32.534841  358628 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:36:32.535227  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 21:36:32.546670  358628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 21:36:32.551302  358628 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:36:32.551419  358628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:36:32.551478  358628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 21:36:32.557253  358628 command_runner.go:130] > 51391683
	I0108 21:36:32.557479  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 21:36:32.567610  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 21:36:32.577814  358628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 21:36:32.582413  358628 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:36:32.582476  358628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:36:32.582526  358628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 21:36:32.587969  358628 command_runner.go:130] > 3ec20f2e
	I0108 21:36:32.588411  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:36:32.598596  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:36:32.609010  358628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:36:32.614208  358628 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:36:32.614331  358628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:36:32.614382  358628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:36:32.620140  358628 command_runner.go:130] > b5213941
	I0108 21:36:32.620294  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
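The three ln -fs steps above follow the OpenSSL trust-store convention: each PEM file under /usr/share/ca-certificates is made reachable from /etc/ssl/certs through a symlink named after its subject hash plus a ".0" suffix. A minimal Go sketch of that idea, assuming an openssl binary on PATH and write access to the target directory (the paths are illustrative, not lifted from this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
// the <hash>.0 symlink in certsDir, mirroring "openssl x509 -hash -noout" +
// "ln -fs" from the log above.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link if present
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked:", link)
}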
	I0108 21:36:32.630712  358628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:36:32.635644  358628 command_runner.go:130] > ca.crt
	I0108 21:36:32.635665  358628 command_runner.go:130] > ca.key
	I0108 21:36:32.635670  358628 command_runner.go:130] > healthcheck-client.crt
	I0108 21:36:32.635674  358628 command_runner.go:130] > healthcheck-client.key
	I0108 21:36:32.635679  358628 command_runner.go:130] > peer.crt
	I0108 21:36:32.635683  358628 command_runner.go:130] > peer.key
	I0108 21:36:32.635688  358628 command_runner.go:130] > server.crt
	I0108 21:36:32.635695  358628 command_runner.go:130] > server.key
	I0108 21:36:32.635840  358628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 21:36:32.641794  358628 command_runner.go:130] > Certificate will not expire
	I0108 21:36:32.641868  358628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 21:36:32.647851  358628 command_runner.go:130] > Certificate will not expire
	I0108 21:36:32.648237  358628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 21:36:32.654687  358628 command_runner.go:130] > Certificate will not expire
	I0108 21:36:32.655000  358628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 21:36:32.660766  358628 command_runner.go:130] > Certificate will not expire
	I0108 21:36:32.660824  358628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 21:36:32.666529  358628 command_runner.go:130] > Certificate will not expire
	I0108 21:36:32.666760  358628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 21:36:32.672388  358628 command_runner.go:130] > Certificate will not expire
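Each "openssl x509 -checkend 86400" call above asks whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means it does not. The same check can be done without shelling out; a small sketch using crypto/x509, with the file path chosen purely for illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// equivalent to openssl's "-checkend" window (86400s == 24h in the log).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}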
	I0108 21:36:32.672440  358628 kubeadm.go:404] StartCluster: {Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.120 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:36:32.672550  358628 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:36:32.672609  358628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:36:32.712899  358628 cri.go:89] found id: ""
	I0108 21:36:32.713017  358628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:36:32.723335  358628 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0108 21:36:32.723380  358628 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0108 21:36:32.723389  358628 command_runner.go:130] > /var/lib/minikube/etcd:
	I0108 21:36:32.723395  358628 command_runner.go:130] > member
	I0108 21:36:32.723417  358628 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 21:36:32.723431  358628 kubeadm.go:636] restartCluster start
	I0108 21:36:32.723520  358628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:36:32.732315  358628 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:32.732977  358628 kubeconfig.go:92] found "multinode-962345" server: "https://192.168.39.239:8443"
	I0108 21:36:32.733409  358628 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:36:32.733661  358628 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:36:32.734374  358628 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 21:36:32.734925  358628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:36:32.744134  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:32.744188  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:32.755868  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:33.244348  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:33.244453  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:33.256704  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:33.744280  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:33.744453  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:33.755894  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:34.244408  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:34.244505  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:34.255979  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:34.745065  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:34.745146  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:34.756259  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:35.244520  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:35.244615  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:35.255817  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:35.744373  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:35.744451  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:35.756148  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:36.244744  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:36.244854  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:36.257127  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:36.744971  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:36.745061  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:36.756948  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:37.244494  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:37.244624  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:37.256148  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:37.744635  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:37.744755  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:37.756198  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:38.244754  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:38.244870  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:38.256111  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:38.744314  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:38.744486  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:38.755179  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:39.244741  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:39.244832  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:39.256145  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:39.744166  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:39.744256  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:39.755252  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:40.244519  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:40.244645  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:40.255402  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:40.744948  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:40.745042  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:40.756368  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:41.244886  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:41.244989  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:41.255765  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:41.744278  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:41.744410  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:41.755104  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:42.244623  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:42.244708  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:42.256902  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:42.744511  358628 api_server.go:166] Checking apiserver status ...
	I0108 21:36:42.744624  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:42.755995  358628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:42.756027  358628 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 21:36:42.756040  358628 kubeadm.go:1135] stopping kube-system containers ...
	I0108 21:36:42.756054  358628 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 21:36:42.756117  358628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:36:42.792445  358628 cri.go:89] found id: ""
	I0108 21:36:42.792527  358628 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:36:42.807303  358628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:42.815764  358628 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 21:36:42.816190  358628 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 21:36:42.816517  358628 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 21:36:42.816807  358628 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:42.817332  358628 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:42.817384  358628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:42.825594  358628 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:42.825615  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:42.946774  358628 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:36:42.946801  358628 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 21:36:42.946832  358628 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 21:36:42.946846  358628 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:36:42.946856  358628 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0108 21:36:42.946878  358628 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:36:42.946891  358628 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0108 21:36:42.946901  358628 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0108 21:36:42.946916  358628 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:36:42.946930  358628 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:36:42.946941  358628 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:36:42.946957  358628 command_runner.go:130] > [certs] Using the existing "sa" key
	I0108 21:36:42.946992  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:43.481646  358628 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:36:43.481675  358628 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:36:43.481682  358628 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:36:43.481692  358628 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:36:43.481705  358628 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:36:43.481745  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:43.661136  358628 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:36:43.661178  358628 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:36:43.661187  358628 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:36:43.661220  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:43.736628  358628 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:36:43.736662  358628 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:36:43.736672  358628 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:36:43.736684  358628 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:36:43.736883  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:43.812471  358628 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
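Because existing configuration was found, the restart path does not run a full "kubeadm init"; it replays only the phases needed to reconcile the node (certs, kubeconfig, kubelet-start, control-plane, etcd), as logged above. A hedged sketch of that loop, assuming root, a kubeadm binary of a matching version on PATH, and the kubeadm.yaml already written to /var/tmp/minikube:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Replays the kubeadm init phases observed in the log, in order, against an
// existing config file. This is a sketch of the sequence, not minikube's
// actual implementation, which runs the same commands over SSH.
func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}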
	I0108 21:36:43.812529  358628 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:36:43.812611  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:44.313230  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:44.813678  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:45.313286  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:45.813264  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:46.312982  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:46.812951  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:46.835766  358628 command_runner.go:130] > 1126
	I0108 21:36:46.836056  358628 api_server.go:72] duration metric: took 3.023521137s to wait for apiserver process to appear ...
	I0108 21:36:46.836079  358628 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:36:46.836098  358628 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0108 21:36:50.554110  358628 api_server.go:279] https://192.168.39.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:36:50.554140  358628 api_server.go:103] status: https://192.168.39.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:36:50.554154  358628 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0108 21:36:50.626931  358628 api_server.go:279] https://192.168.39.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:36:50.626973  358628 api_server.go:103] status: https://192.168.39.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:36:50.836217  358628 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0108 21:36:50.843303  358628 api_server.go:279] https://192.168.39.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:36:50.843338  358628 api_server.go:103] status: https://192.168.39.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:36:51.337151  358628 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0108 21:36:51.342413  358628 api_server.go:279] https://192.168.39.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:36:51.342441  358628 api_server.go:103] status: https://192.168.39.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:36:51.837201  358628 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0108 21:36:51.842374  358628 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I0108 21:36:51.842448  358628 round_trippers.go:463] GET https://192.168.39.239:8443/version
	I0108 21:36:51.842453  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:51.842462  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:51.842471  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:51.853868  358628 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0108 21:36:51.853899  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:51.853916  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:51 GMT
	I0108 21:36:51.853924  358628 round_trippers.go:580]     Audit-Id: 2b78dd29-2031-41f0-b3c8-358da48aed5a
	I0108 21:36:51.853939  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:51.853947  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:51.853954  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:51.853962  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:51.853971  358628 round_trippers.go:580]     Content-Length: 264
	I0108 21:36:51.854004  358628 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 21:36:51.854103  358628 api_server.go:141] control plane version: v1.28.4
	I0108 21:36:51.854128  358628 api_server.go:131] duration metric: took 5.018042059s to wait for apiserver health ...
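The loop above polls https://192.168.39.239:8443/healthz roughly every 500ms, treating the 500 responses (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes still failing) as "not ready yet" until the endpoint finally returns 200. The following is only a minimal Go sketch of such a poller, not minikube's api_server.go code; the URL and retry cadence are taken from the log, and skipping TLS verification is a simplifying assumption (the real client trusts the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz URL until it returns 200 OK
// or the timeout expires. Sketch only; error handling is deliberately terse.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip cert verification instead of
		// loading the cluster CA the way minikube's real client does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retries in the log
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.239:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the log the wait takes about five seconds because the rbac and priority-class post-start hooks are the last to report ok.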
	I0108 21:36:51.854139  358628 cni.go:84] Creating CNI manager for ""
	I0108 21:36:51.854147  358628 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:36:51.856185  358628 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:36:51.857782  358628 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:36:51.873895  358628 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:36:51.873928  358628 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:36:51.873938  358628 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:36:51.873946  358628 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:36:51.873955  358628 command_runner.go:130] > Access: 2024-01-08 21:36:17.412212418 +0000
	I0108 21:36:51.873962  358628 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0108 21:36:51.873970  358628 command_runner.go:130] > Change: 2024-01-08 21:36:15.543212418 +0000
	I0108 21:36:51.873977  358628 command_runner.go:130] >  Birth: -
	I0108 21:36:51.874243  358628 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:36:51.874266  358628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:36:51.919399  358628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:36:53.124009  358628 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:36:53.130092  358628 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:36:53.135664  358628 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:36:53.150338  358628 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 21:36:53.152958  358628 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.233499928s)
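After the apiserver is healthy, the CNI step copies a kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the kubectl binary pinned to the cluster version. In minikube this runs through ssh_runner inside the VM; the sketch below simply shells out with os/exec under the assumption it is executed on the node itself, with the paths taken verbatim from the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log shows ssh_runner executing inside the VM.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		fmt.Println("kubectl apply failed:", err)
	}
}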
	I0108 21:36:53.152993  358628 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:36:53.153103  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:36:53.153113  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.153124  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.153138  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.157373  358628 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:36:53.157393  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.157403  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.157410  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.157417  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.157425  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.157432  358628 round_trippers.go:580]     Audit-Id: 0f0a9daa-5383-4eb3-9c17-291cf3d3915b
	I0108 21:36:53.157440  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.161542  358628 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"755"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82646 chars]
	I0108 21:36:53.167801  358628 system_pods.go:59] 12 kube-system pods found
	I0108 21:36:53.167851  358628 system_pods.go:61] "coredns-5dd5756b68-v6dmd" [9c1edff2-3b29-4045-b7b9-935c47115d16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 21:36:53.167864  358628 system_pods.go:61] "etcd-multinode-962345" [44773ce7-5393-4178-a985-d8bf216f88f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:36:53.167879  358628 system_pods.go:61] "kindnet-5w9nh" [b84fc0ee-c9b1-4e6c-b066-536f2fd56d52] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:36:53.167888  358628 system_pods.go:61] "kindnet-mvv2x" [74892ac7-d01b-459d-8faf-b3a774b7b190] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:36:53.167898  358628 system_pods.go:61] "kindnet-psmlz" [4bcadd03-9934-4b8e-b732-6e1c97265ff7] Running
	I0108 21:36:53.167916  358628 system_pods.go:61] "kube-apiserver-multinode-962345" [bea03251-08df-4434-bc4a-36ef454e151e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:36:53.167929  358628 system_pods.go:61] "kube-controller-manager-multinode-962345" [80b86d62-83f0-4550-988f-6255409d39da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:36:53.167940  358628 system_pods.go:61] "kube-proxy-2c2t6" [4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e] Running
	I0108 21:36:53.167953  358628 system_pods.go:61] "kube-proxy-bmjzs" [fbfa39a4-ba62-4e31-8126-9a320311e846] Running
	I0108 21:36:53.167961  358628 system_pods.go:61] "kube-proxy-cpq6p" [52634211-9ecd-4fd9-a8ce-88f67c668e75] Running
	I0108 21:36:53.167970  358628 system_pods.go:61] "kube-scheduler-multinode-962345" [3778c0a4-1528-4336-9f02-b77a2a6a1c34] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:36:53.167984  358628 system_pods.go:61] "storage-provisioner" [da89492c-e129-462d-b84e-2f4a10085550] Running
	I0108 21:36:53.167993  358628 system_pods.go:74] duration metric: took 14.992937ms to wait for pod list to return data ...
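With the CNI daemonset reconciled, system_pods.go lists everything in kube-system and records which containers are still restarting. minikube issues the raw GET shown above through its own round-tripper; a client-go equivalent, assuming the kubeconfig path from the log is readable from wherever this runs, looks roughly like:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: this kubeconfig is readable from wherever the sketch runs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
	}
}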
	I0108 21:36:53.168005  358628 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:36:53.168114  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes
	I0108 21:36:53.168131  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.168141  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.168156  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.171242  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:53.171261  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.171270  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.171277  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.171285  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.171292  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.171299  358628 round_trippers.go:580]     Audit-Id: 9cca2d3f-9bac-49e5-9b0c-a5053b9315ca
	I0108 21:36:53.171315  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.172407  358628 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"755"},"items":[{"metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"742","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16356 chars]
	I0108 21:36:53.173186  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:36:53.173213  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:36:53.173227  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:36:53.173234  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:36:53.173240  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:36:53.173249  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:36:53.173257  358628 node_conditions.go:105] duration metric: took 5.242047ms to run NodePressure ...
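The NodePressure step reads every node's capacity; the three identical cpu/ephemeral-storage pairs above are the three multinode-962345 nodes. A small sketch of pulling the same fields with client-go, with the kubeconfig path again an assumption carried over from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]                  // "2" in the log
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage] // "17784752Ki" in the log
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}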
	I0108 21:36:53.173282  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:53.392797  358628 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 21:36:53.392833  358628 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
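Once the node-level checks pass, minikube re-runs the addon phase of kubeadm against the generated config, which re-applies the CoreDNS and kube-proxy manifests. A sketch of the equivalent command, lifted from the Run line above; it assumes execution inside the VM, and the hard-coded PATH stands in for the original PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" expansion.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the /bin/bash -c invocation in the log; the PATH value is an
	// illustrative assumption pointing at the version-pinned binaries.
	cmd := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.28.4:/usr/bin:/bin",
		"kubeadm", "init", "phase", "addon", "all",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // "[addons] Applied essential addon: CoreDNS" / "kube-proxy"
	if err != nil {
		fmt.Println("kubeadm addon phase failed:", err)
	}
}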
	I0108 21:36:53.392871  358628 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 21:36:53.393063  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0108 21:36:53.393082  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.393093  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.393102  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.396201  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:53.396221  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.396228  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.396233  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.396238  358628 round_trippers.go:580]     Audit-Id: 08e40fc5-e828-4837-bfad-484db5d74955
	I0108 21:36:53.396243  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.396248  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.396258  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.398103  358628 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"757"},"items":[{"metadata":{"name":"etcd-multinode-962345","namespace":"kube-system","uid":"44773ce7-5393-4178-a985-d8bf216f88f1","resourceVersion":"746","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.239:2379","kubernetes.io/config.hash":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.mirror":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.seen":"2024-01-08T21:26:26.755438257Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0108 21:36:53.399191  358628 kubeadm.go:787] kubelet initialised
	I0108 21:36:53.399212  358628 kubeadm.go:788] duration metric: took 6.331829ms waiting for restarted kubelet to initialise ...
	I0108 21:36:53.399220  358628 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:53.399297  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:36:53.399306  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.399313  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.399328  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.405462  358628 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 21:36:53.405484  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.405494  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.405503  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.405511  358628 round_trippers.go:580]     Audit-Id: 5470ed89-1dea-4e1f-b23d-05caf362d510
	I0108 21:36:53.405531  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.405542  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.405553  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.408950  358628 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"757"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82646 chars]
	I0108 21:36:53.411858  358628 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:53.411959  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:36:53.411968  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.411975  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.411981  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.414353  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:53.414375  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.414385  358628 round_trippers.go:580]     Audit-Id: de4d6551-2709-40df-bf9c-568e5c4baaef
	I0108 21:36:53.414393  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.414402  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.414413  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.414423  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.414431  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.414591  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 21:36:53.414976  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:53.414990  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.415000  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.415006  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.416830  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:36:53.416851  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.416860  358628 round_trippers.go:580]     Audit-Id: 3984c2dd-eb5f-42b8-a213-62b98f4a2eb4
	I0108 21:36:53.416868  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.416876  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.416885  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.416892  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.416903  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.417058  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"742","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 21:36:53.417486  358628 pod_ready.go:97] node "multinode-962345" hosting pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:53.417517  358628 pod_ready.go:81] duration metric: took 5.634654ms waiting for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:53.417531  358628 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-962345" hosting pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
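pod_ready.go refuses to count a pod as Ready while its node still reports Ready=False, which is why coredns-5dd5756b68-v6dmd here, and etcd, kube-apiserver, kube-controller-manager and kube-proxy-bmjzs below, are skipped until the restarted kubelet marks multinode-962345 Ready again. A sketch of that node-gating check; the helper name and the use of client-go are illustrative, not minikube's actual code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostNodeReady reports whether the node hosting the given pod has the
// NodeReady condition set to True (hypothetical helper for illustration).
func hostNodeReady(ctx context.Context, cs *kubernetes.Clientset, ns, pod string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	n, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := hostNodeReady(context.TODO(), cs, "kube-system", "coredns-5dd5756b68-v6dmd")
	fmt.Println(ready, err) // false while multinode-962345 is still NotReady
}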
	I0108 21:36:53.417544  358628 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:53.417617  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-962345
	I0108 21:36:53.417629  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.417639  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.417649  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.419266  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:36:53.419285  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.419293  358628 round_trippers.go:580]     Audit-Id: 36619c4a-7de5-4611-a043-a2d4e06d5b4f
	I0108 21:36:53.419302  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.419310  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.419321  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.419332  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.419349  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.419658  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-962345","namespace":"kube-system","uid":"44773ce7-5393-4178-a985-d8bf216f88f1","resourceVersion":"746","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.239:2379","kubernetes.io/config.hash":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.mirror":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.seen":"2024-01-08T21:26:26.755438257Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0108 21:36:53.419982  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:53.419993  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.420000  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.420011  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.421811  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:36:53.421828  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.421836  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.421843  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.421851  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.421873  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.421886  358628 round_trippers.go:580]     Audit-Id: 89d53b26-0af7-43b1-9589-efad5bbfd1e3
	I0108 21:36:53.421895  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.422223  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"742","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 21:36:53.422538  358628 pod_ready.go:97] node "multinode-962345" hosting pod "etcd-multinode-962345" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:53.422563  358628 pod_ready.go:81] duration metric: took 5.004704ms waiting for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:53.422574  358628 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-962345" hosting pod "etcd-multinode-962345" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:53.422588  358628 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:53.422641  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-962345
	I0108 21:36:53.422652  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.422662  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.422672  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.425258  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:53.425279  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.425288  358628 round_trippers.go:580]     Audit-Id: f262965b-a24b-493b-b0f7-e0d75e6584c9
	I0108 21:36:53.425297  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.425304  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.425312  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.425323  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.425333  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.425566  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-962345","namespace":"kube-system","uid":"bea03251-08df-4434-bc4a-36ef454e151e","resourceVersion":"747","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.239:8443","kubernetes.io/config.hash":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.mirror":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.seen":"2024-01-08T21:26:26.755439577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0108 21:36:53.426030  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:53.426047  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.426058  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.426074  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.428593  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:53.428610  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.428619  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.428627  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.428636  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.428651  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.428664  358628 round_trippers.go:580]     Audit-Id: 8e36313e-4de9-47cf-948f-f5973b5f3f13
	I0108 21:36:53.428675  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.428801  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"742","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 21:36:53.429075  358628 pod_ready.go:97] node "multinode-962345" hosting pod "kube-apiserver-multinode-962345" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:53.429092  358628 pod_ready.go:81] duration metric: took 6.497654ms waiting for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:53.429102  358628 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-962345" hosting pod "kube-apiserver-multinode-962345" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:53.429112  358628 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:53.429175  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-962345
	I0108 21:36:53.429184  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.429194  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.429205  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.432673  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:53.432686  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.432692  358628 round_trippers.go:580]     Audit-Id: bfc80a84-a128-436c-9e24-0ba0a250671f
	I0108 21:36:53.432698  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.432703  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.432708  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.432719  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.432728  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.432929  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-962345","namespace":"kube-system","uid":"80b86d62-83f0-4550-988f-6255409d39da","resourceVersion":"748","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.mirror":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.seen":"2024-01-08T21:26:26.755427365Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0108 21:36:53.553626  358628 request.go:629] Waited for 120.297946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:53.553722  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:53.553735  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.553743  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.553756  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.556325  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:53.556350  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.556363  358628 round_trippers.go:580]     Audit-Id: bafbc5b6-d069-424f-afbb-3edd040011c5
	I0108 21:36:53.556371  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.556380  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.556388  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.556398  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.556425  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.556574  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"742","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 21:36:53.557155  358628 pod_ready.go:97] node "multinode-962345" hosting pod "kube-controller-manager-multinode-962345" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:53.557221  358628 pod_ready.go:81] duration metric: took 128.064371ms waiting for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:53.557242  358628 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-962345" hosting pod "kube-controller-manager-multinode-962345" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
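The "Waited for … due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (QPS 5, burst 10), which the rapid per-pod Get plus per-node Get pairs above are enough to trip; the X-Kubernetes-Pf-* response headers show the apiserver itself is not queueing these requests. The knobs live on rest.Config; a sketch of raising them, with purely illustrative values:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; raising them avoids the
	// "client-side throttling" waits seen in the log (illustrative values).
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}

minikube keeps the defaults, which is why even this modest burst of pod and node Gets produces the roughly 120-400ms waits logged in the following readiness checks.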
	I0108 21:36:53.557258  358628 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:53.753738  358628 request.go:629] Waited for 196.387629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:36:53.753850  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:36:53.753864  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.753876  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.753889  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.756833  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:53.756862  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.756872  358628 round_trippers.go:580]     Audit-Id: 166d46bd-bb70-4441-8ace-a8b01bf91019
	I0108 21:36:53.756879  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.756886  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.756894  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.756902  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.756911  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.757084  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2c2t6","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e","resourceVersion":"506","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 21:36:53.953141  358628 request.go:629] Waited for 195.304873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:36:53.953221  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:36:53.953226  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:53.953234  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:53.953241  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:53.955585  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:53.955611  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:53.955622  358628 round_trippers.go:580]     Audit-Id: ce6f8042-4fc0-4fe3-9c12-4637197e957b
	I0108 21:36:53.955631  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:53.955640  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:53.955650  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:53.955660  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:53.955670  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:53 GMT
	I0108 21:36:53.955954  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"739","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_28_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0108 21:36:53.956334  358628 pod_ready.go:92] pod "kube-proxy-2c2t6" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:53.956356  358628 pod_ready.go:81] duration metric: took 399.084624ms waiting for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:53.956368  358628 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:54.153356  358628 request.go:629] Waited for 196.916294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:36:54.153468  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:36:54.153478  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:54.153486  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:54.153493  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:54.156303  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:54.156326  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:54.156334  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:54.156339  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:54 GMT
	I0108 21:36:54.156344  358628 round_trippers.go:580]     Audit-Id: 4c8fab5c-f7fa-4a18-9a25-08532d4885b1
	I0108 21:36:54.156353  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:54.156361  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:54.156373  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:54.156828  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmjzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"fbfa39a4-ba62-4e31-8126-9a320311e846","resourceVersion":"754","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 21:36:54.353815  358628 request.go:629] Waited for 196.441138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:54.353896  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:54.353904  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:54.353915  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:54.353970  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:54.358709  358628 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:36:54.358737  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:54.358747  358628 round_trippers.go:580]     Audit-Id: 49d26e06-4dab-4cfa-9ba8-4b552130b288
	I0108 21:36:54.358756  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:54.358768  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:54.358779  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:54.358790  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:54.358801  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:54 GMT
	I0108 21:36:54.358992  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"742","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 21:36:54.359525  358628 pod_ready.go:97] node "multinode-962345" hosting pod "kube-proxy-bmjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:54.359555  358628 pod_ready.go:81] duration metric: took 403.17726ms waiting for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:54.359569  358628 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-962345" hosting pod "kube-proxy-bmjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:54.359582  358628 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cpq6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:54.553939  358628 request.go:629] Waited for 194.277032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cpq6p
	I0108 21:36:54.554016  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cpq6p
	I0108 21:36:54.554024  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:54.554035  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:54.554045  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:54.557978  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:54.558011  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:54.558030  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:54.558054  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:54.558063  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:54 GMT
	I0108 21:36:54.558076  358628 round_trippers.go:580]     Audit-Id: f3ae451f-ace9-471d-b2c5-255b6d3e5457
	I0108 21:36:54.558084  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:54.558095  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:54.558956  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cpq6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"52634211-9ecd-4fd9-a8ce-88f67c668e75","resourceVersion":"717","creationTimestamp":"2024-01-08T21:28:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 21:36:54.754006  358628 request.go:629] Waited for 194.414579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:36:54.754088  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:36:54.754094  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:54.754105  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:54.754113  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:54.756641  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:54.756676  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:54.756683  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:54.756690  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:54.756699  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:54 GMT
	I0108 21:36:54.756707  358628 round_trippers.go:580]     Audit-Id: ab08dc37-47a2-48c7-a85e-dab28d6d8ad9
	I0108 21:36:54.756725  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:54.756737  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:54.756911  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m03","uid":"d31cb22f-3104-4da9-bd90-2f7e1fa3889a","resourceVersion":"740","creationTimestamp":"2024-01-08T21:28:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_28_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0108 21:36:54.757212  358628 pod_ready.go:92] pod "kube-proxy-cpq6p" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:54.757233  358628 pod_ready.go:81] duration metric: took 397.639146ms waiting for pod "kube-proxy-cpq6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:54.757246  358628 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:54.954144  358628 request.go:629] Waited for 196.818921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:36:54.954236  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:36:54.954248  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:54.954256  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:54.954263  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:54.957594  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:54.957622  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:54.957633  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:54 GMT
	I0108 21:36:54.957642  358628 round_trippers.go:580]     Audit-Id: fe8fc863-9945-4488-829d-2a32d4844a0a
	I0108 21:36:54.957651  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:54.957659  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:54.957667  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:54.957675  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:54.957813  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-962345","namespace":"kube-system","uid":"3778c0a4-1528-4336-9f02-b77a2a6a1c34","resourceVersion":"743","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.mirror":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.seen":"2024-01-08T21:26:26.755431609Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0108 21:36:55.153603  358628 request.go:629] Waited for 195.386277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:55.153702  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:55.153714  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:55.153728  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:55.153743  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:55.157111  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:55.157133  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:55.157142  358628 round_trippers.go:580]     Audit-Id: ebe53e94-3533-4824-8106-fecf4eb0469b
	I0108 21:36:55.157150  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:55.157158  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:55.157165  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:55.157172  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:55.157179  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:55 GMT
	I0108 21:36:55.157499  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"742","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 21:36:55.157850  358628 pod_ready.go:97] node "multinode-962345" hosting pod "kube-scheduler-multinode-962345" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:55.157871  358628 pod_ready.go:81] duration metric: took 400.617799ms waiting for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:55.157885  358628 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-962345" hosting pod "kube-scheduler-multinode-962345" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-962345" has status "Ready":"False"
	I0108 21:36:55.157896  358628 pod_ready.go:38] duration metric: took 1.75866376s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:55.157914  358628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:55.169216  358628 command_runner.go:130] > -16
	I0108 21:36:55.169487  358628 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:55.169500  358628 kubeadm.go:640] restartCluster took 22.446060063s
	I0108 21:36:55.169510  358628 kubeadm.go:406] StartCluster complete in 22.497073296s
	I0108 21:36:55.169540  358628 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:55.169624  358628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:36:55.170539  358628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:55.170806  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:55.170903  358628 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:36:55.173760  358628 out.go:177] * Enabled addons: 
	I0108 21:36:55.171242  358628 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:36:55.171244  358628 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:36:55.174989  358628 addons.go:508] enable addons completed in 4.085489ms: enabled=[]
	I0108 21:36:55.175443  358628 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:36:55.175918  358628 round_trippers.go:463] GET https://192.168.39.239:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:36:55.175940  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:55.175952  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:55.175965  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:55.178395  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:55.178409  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:55.178416  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:55.178421  358628 round_trippers.go:580]     Content-Length: 291
	I0108 21:36:55.178426  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:55 GMT
	I0108 21:36:55.178431  358628 round_trippers.go:580]     Audit-Id: db697b6b-1c0a-456e-b2ae-900f3d2d00a9
	I0108 21:36:55.178436  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:55.178443  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:55.178447  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:55.178484  358628 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9a0db73a-68c0-469b-b860-0baad5e41646","resourceVersion":"756","creationTimestamp":"2024-01-08T21:26:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:36:55.178684  358628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-962345" context rescaled to 1 replicas
	I0108 21:36:55.178713  358628 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:36:55.180231  358628 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:55.181396  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:55.286406  358628 command_runner.go:130] > apiVersion: v1
	I0108 21:36:55.286437  358628 command_runner.go:130] > data:
	I0108 21:36:55.286442  358628 command_runner.go:130] >   Corefile: |
	I0108 21:36:55.286446  358628 command_runner.go:130] >     .:53 {
	I0108 21:36:55.286449  358628 command_runner.go:130] >         log
	I0108 21:36:55.286454  358628 command_runner.go:130] >         errors
	I0108 21:36:55.286458  358628 command_runner.go:130] >         health {
	I0108 21:36:55.286463  358628 command_runner.go:130] >            lameduck 5s
	I0108 21:36:55.286466  358628 command_runner.go:130] >         }
	I0108 21:36:55.286471  358628 command_runner.go:130] >         ready
	I0108 21:36:55.286476  358628 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 21:36:55.286480  358628 command_runner.go:130] >            pods insecure
	I0108 21:36:55.286496  358628 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 21:36:55.286502  358628 command_runner.go:130] >            ttl 30
	I0108 21:36:55.286505  358628 command_runner.go:130] >         }
	I0108 21:36:55.286512  358628 command_runner.go:130] >         prometheus :9153
	I0108 21:36:55.286519  358628 command_runner.go:130] >         hosts {
	I0108 21:36:55.286532  358628 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0108 21:36:55.286542  358628 command_runner.go:130] >            fallthrough
	I0108 21:36:55.286548  358628 command_runner.go:130] >         }
	I0108 21:36:55.286559  358628 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 21:36:55.286567  358628 command_runner.go:130] >            max_concurrent 1000
	I0108 21:36:55.286579  358628 command_runner.go:130] >         }
	I0108 21:36:55.286583  358628 command_runner.go:130] >         cache 30
	I0108 21:36:55.286588  358628 command_runner.go:130] >         loop
	I0108 21:36:55.286592  358628 command_runner.go:130] >         reload
	I0108 21:36:55.286596  358628 command_runner.go:130] >         loadbalance
	I0108 21:36:55.286600  358628 command_runner.go:130] >     }
	I0108 21:36:55.286608  358628 command_runner.go:130] > kind: ConfigMap
	I0108 21:36:55.286611  358628 command_runner.go:130] > metadata:
	I0108 21:36:55.286616  358628 command_runner.go:130] >   creationTimestamp: "2024-01-08T21:26:26Z"
	I0108 21:36:55.286622  358628 command_runner.go:130] >   name: coredns
	I0108 21:36:55.286626  358628 command_runner.go:130] >   namespace: kube-system
	I0108 21:36:55.286631  358628 command_runner.go:130] >   resourceVersion: "396"
	I0108 21:36:55.286636  358628 command_runner.go:130] >   uid: 40588f70-e960-47a7-b449-3780d271733d
	I0108 21:36:55.286727  358628 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 21:36:55.286718  358628 node_ready.go:35] waiting up to 6m0s for node "multinode-962345" to be "Ready" ...
	I0108 21:36:55.354117  358628 request.go:629] Waited for 67.252015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:55.354191  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:55.354196  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:55.354204  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:55.354210  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:55.357114  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:55.357141  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:55.357152  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:55 GMT
	I0108 21:36:55.357161  358628 round_trippers.go:580]     Audit-Id: 2fde5c32-46ec-4c03-9341-36e5fed90ec2
	I0108 21:36:55.357169  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:55.357182  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:55.357198  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:55.357207  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:55.357427  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"742","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 21:36:55.786949  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:55.786975  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:55.786984  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:55.786999  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:55.792252  358628 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 21:36:55.792281  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:55.792293  358628 round_trippers.go:580]     Audit-Id: cff0f00d-2471-4896-bc13-0c81eb2abff6
	I0108 21:36:55.792302  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:55.792310  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:55.792320  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:55.792328  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:55.792336  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:55 GMT
	I0108 21:36:55.792498  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"742","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 21:36:56.287000  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:56.287031  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:56.287040  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:56.287048  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:56.289823  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:56.289844  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:56.289852  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:56.289857  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:56.289865  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:56.289871  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:56.289876  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:56 GMT
	I0108 21:36:56.289881  358628 round_trippers.go:580]     Audit-Id: dc92910a-a559-48da-9448-38d9e7628e83
	I0108 21:36:56.290143  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:36:56.290479  358628 node_ready.go:49] node "multinode-962345" has status "Ready":"True"
	I0108 21:36:56.290495  358628 node_ready.go:38] duration metric: took 1.003747646s waiting for node "multinode-962345" to be "Ready" ...
	I0108 21:36:56.290505  358628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:56.290562  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:36:56.290570  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:56.290577  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:56.290583  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:56.293956  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:56.293982  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:56.293993  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:56.294001  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:56.294013  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:56 GMT
	I0108 21:36:56.294021  358628 round_trippers.go:580]     Audit-Id: 44192cdc-6579-4df2-8ebd-b3c4b0cb4c15
	I0108 21:36:56.294029  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:56.294038  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:56.295146  358628 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"860"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82965 chars]
	I0108 21:36:56.298605  358628 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:56.298689  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:36:56.298699  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:56.298706  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:56.298715  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:56.301813  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:56.301832  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:56.301842  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:56 GMT
	I0108 21:36:56.301850  358628 round_trippers.go:580]     Audit-Id: c8647616-ddeb-4313-9a63-58958aab6ee2
	I0108 21:36:56.301857  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:56.301866  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:56.301874  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:56.301884  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:56.302016  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 21:36:56.353403  358628 request.go:629] Waited for 50.844857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:56.353479  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:56.353484  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:56.353492  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:56.353498  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:56.356238  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:56.356260  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:56.356266  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:56.356272  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:56.356277  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:56.356285  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:56.356291  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:56 GMT
	I0108 21:36:56.356295  358628 round_trippers.go:580]     Audit-Id: 2d86fcd8-451a-4dfd-a9ce-7b20d246e839
	I0108 21:36:56.356626  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:36:56.799277  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:36:56.799303  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:56.799311  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:56.799317  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:56.802365  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:56.802391  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:56.802403  358628 round_trippers.go:580]     Audit-Id: 9c444e41-b1c0-4bd9-aec6-18a73e144939
	I0108 21:36:56.802413  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:56.802422  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:56.802428  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:56.802434  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:56.802439  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:56 GMT
	I0108 21:36:56.802657  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 21:36:56.803120  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:56.803138  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:56.803145  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:56.803152  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:56.805770  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:56.805787  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:56.805793  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:56 GMT
	I0108 21:36:56.805798  358628 round_trippers.go:580]     Audit-Id: 24dead9f-3015-4a25-9dc3-fa90f9744d80
	I0108 21:36:56.805804  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:56.805812  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:56.805821  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:56.805829  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:56.805936  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:36:57.299631  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:36:57.299664  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:57.299677  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:57.299686  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:57.302604  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:57.302623  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:57.302636  358628 round_trippers.go:580]     Audit-Id: d75fab59-14d0-48a7-a4ee-aea0d781ec43
	I0108 21:36:57.302642  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:57.302647  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:57.302652  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:57.302657  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:57.302662  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:57 GMT
	I0108 21:36:57.303415  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 21:36:57.304019  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:57.304037  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:57.304045  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:57.304054  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:57.306630  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:57.306651  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:57.306672  358628 round_trippers.go:580]     Audit-Id: a4b32bd5-9e73-47a8-8dd9-58e126b613a5
	I0108 21:36:57.306680  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:57.306688  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:57.306696  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:57.306708  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:57.306716  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:57 GMT
	I0108 21:36:57.307229  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:36:57.798907  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:36:57.798940  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:57.798948  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:57.798954  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:57.801822  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:57.801843  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:57.801850  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:57.801855  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:57 GMT
	I0108 21:36:57.801861  358628 round_trippers.go:580]     Audit-Id: e0aaffe6-d0f8-4e33-8300-6b9eeab8be54
	I0108 21:36:57.801866  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:57.801871  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:57.801885  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:57.802427  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 21:36:57.802880  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:57.802892  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:57.802899  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:57.802904  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:57.805408  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:57.805424  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:57.805432  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:57.805441  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:57 GMT
	I0108 21:36:57.805449  358628 round_trippers.go:580]     Audit-Id: 634765bb-45fa-4f10-b160-f52ad17e84cc
	I0108 21:36:57.805458  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:57.805466  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:57.805475  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:57.805620  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:36:58.298904  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:36:58.298938  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:58.298951  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:58.298959  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:58.301835  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:58.301857  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:58.301864  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:58.301870  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:58 GMT
	I0108 21:36:58.301875  358628 round_trippers.go:580]     Audit-Id: 2835c2f3-cda9-4eb5-91bf-1fd12cf6517b
	I0108 21:36:58.301880  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:58.301885  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:58.301895  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:58.302469  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 21:36:58.302945  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:58.302959  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:58.302966  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:58.302972  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:58.306030  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:58.306049  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:58.306059  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:58.306068  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:58.306080  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:58.306088  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:58 GMT
	I0108 21:36:58.306098  358628 round_trippers.go:580]     Audit-Id: ed57a808-20ea-4f87-922e-106776faba6f
	I0108 21:36:58.306110  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:58.306232  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:36:58.306532  358628 pod_ready.go:102] pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:58.798882  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:36:58.798909  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:58.798921  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:58.798935  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:58.801902  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:58.801940  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:58.801951  358628 round_trippers.go:580]     Audit-Id: 89eaf4b4-c32e-4c0c-beeb-59eaf6b0ab22
	I0108 21:36:58.801960  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:58.801967  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:58.801972  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:58.801979  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:58.801987  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:58 GMT
	I0108 21:36:58.802201  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 21:36:58.802880  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:58.802906  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:58.802918  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:58.802930  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:58.811644  358628 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:36:58.811661  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:58.811667  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:58.811674  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:58.811683  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:58.811691  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:58 GMT
	I0108 21:36:58.811699  358628 round_trippers.go:580]     Audit-Id: 3cf71ad6-147e-415a-9f98-59fcfa9a46fe
	I0108 21:36:58.811707  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:58.811993  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:36:59.299709  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:36:59.299737  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:59.299767  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:59.299778  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:59.304000  358628 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:36:59.304026  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:59.304035  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:59 GMT
	I0108 21:36:59.304043  358628 round_trippers.go:580]     Audit-Id: d3a1e0f2-7f04-45af-aae5-6f7ff112494f
	I0108 21:36:59.304050  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:59.304059  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:59.304069  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:59.304082  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:59.304263  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 21:36:59.304759  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:59.304778  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:59.304788  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:59.304796  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:59.306967  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:36:59.306991  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:59.307000  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:59.307008  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:59.307016  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:59.307027  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:59.307035  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:59 GMT
	I0108 21:36:59.307045  358628 round_trippers.go:580]     Audit-Id: daeca870-72c4-4602-8672-f80528e5f074
	I0108 21:36:59.307190  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:36:59.799453  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:36:59.799481  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:59.799493  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:59.799499  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:59.802632  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:36:59.802654  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:59.802664  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:59 GMT
	I0108 21:36:59.802672  358628 round_trippers.go:580]     Audit-Id: ef68beb4-2754-4d9d-a43f-c61bd6ec5e15
	I0108 21:36:59.802679  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:59.802685  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:59.802692  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:59.802699  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:59.803294  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"750","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 21:36:59.803810  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:36:59.803833  358628 round_trippers.go:469] Request Headers:
	I0108 21:36:59.803844  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:36:59.803853  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:36:59.805668  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:36:59.805682  358628 round_trippers.go:577] Response Headers:
	I0108 21:36:59.805689  358628 round_trippers.go:580]     Audit-Id: 76c591e4-e68a-4770-a1ce-b5429e154b7e
	I0108 21:36:59.805699  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:36:59.805704  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:36:59.805709  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:36:59.805723  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:36:59.805731  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:36:59 GMT
	I0108 21:36:59.806140  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:37:00.298795  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:37:00.298827  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:00.298838  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:00.298847  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:00.301819  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:37:00.301844  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:00.301855  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:00.301867  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:00.301879  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:00.301890  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:00 GMT
	I0108 21:37:00.301901  358628 round_trippers.go:580]     Audit-Id: 67ec85ff-2fec-47e1-8fe7-266a456b8bc7
	I0108 21:37:00.301909  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:00.302134  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"871","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 21:37:00.302738  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:37:00.302754  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:00.302762  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:00.302768  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:00.305777  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:37:00.305792  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:00.305799  358628 round_trippers.go:580]     Audit-Id: 39d99237-bb91-4e67-9957-81a2c944bed5
	I0108 21:37:00.305804  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:00.305809  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:00.305814  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:00.305824  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:00.305830  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:00 GMT
	I0108 21:37:00.306460  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:37:00.306812  358628 pod_ready.go:92] pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace has status "Ready":"True"
	I0108 21:37:00.306830  358628 pod_ready.go:81] duration metric: took 4.008202833s waiting for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:00.306843  358628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:00.306913  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-962345
	I0108 21:37:00.306923  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:00.306933  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:00.306943  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:00.320062  358628 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0108 21:37:00.320084  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:00.320091  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:00.320097  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:00.320102  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:00 GMT
	I0108 21:37:00.320107  358628 round_trippers.go:580]     Audit-Id: 80587078-c7be-4bcb-bf56-1e141c188f54
	I0108 21:37:00.320112  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:00.320117  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:00.321049  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-962345","namespace":"kube-system","uid":"44773ce7-5393-4178-a985-d8bf216f88f1","resourceVersion":"864","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.239:2379","kubernetes.io/config.hash":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.mirror":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.seen":"2024-01-08T21:26:26.755438257Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 21:37:00.321486  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:37:00.321503  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:00.321511  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:00.321521  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:00.324018  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:37:00.324032  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:00.324038  358628 round_trippers.go:580]     Audit-Id: 0d352527-38cb-4e4e-adb8-dd8066b25d7e
	I0108 21:37:00.324044  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:00.324050  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:00.324062  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:00.324070  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:00.324078  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:00 GMT
	I0108 21:37:00.324461  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:37:00.324802  358628 pod_ready.go:92] pod "etcd-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:37:00.324830  358628 pod_ready.go:81] duration metric: took 17.979107ms waiting for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:00.324846  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:00.324914  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-962345
	I0108 21:37:00.324931  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:00.324939  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:00.324949  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:00.328801  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:37:00.328815  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:00.328821  358628 round_trippers.go:580]     Audit-Id: c8ec3a41-c105-47e1-871a-d8e3212f7ba3
	I0108 21:37:00.328827  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:00.328832  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:00.328837  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:00.328842  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:00.328847  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:00 GMT
	I0108 21:37:00.329414  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-962345","namespace":"kube-system","uid":"bea03251-08df-4434-bc4a-36ef454e151e","resourceVersion":"862","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.239:8443","kubernetes.io/config.hash":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.mirror":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.seen":"2024-01-08T21:26:26.755439577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 21:37:00.354013  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:37:00.354051  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:00.354060  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:00.354071  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:00.358702  358628 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:37:00.358722  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:00.358736  358628 round_trippers.go:580]     Audit-Id: fb1908b4-8b1a-4087-bb4c-8cf407c65a7f
	I0108 21:37:00.358745  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:00.358754  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:00.358767  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:00.358778  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:00.358787  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:00 GMT
	I0108 21:37:00.359793  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:37:00.360114  358628 pod_ready.go:92] pod "kube-apiserver-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:37:00.360134  358628 pod_ready.go:81] duration metric: took 35.2782ms waiting for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:00.360152  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:00.553630  358628 request.go:629] Waited for 193.386799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-962345
	I0108 21:37:00.553714  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-962345
	I0108 21:37:00.553722  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:00.553730  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:00.553737  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:00.557665  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:37:00.557687  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:00.557698  358628 round_trippers.go:580]     Audit-Id: ca75e4d0-f1a0-4819-be29-6db5f2caae6b
	I0108 21:37:00.557724  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:00.557735  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:00.557744  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:00.557754  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:00.557769  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:00 GMT
	I0108 21:37:00.558326  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-962345","namespace":"kube-system","uid":"80b86d62-83f0-4550-988f-6255409d39da","resourceVersion":"865","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.mirror":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.seen":"2024-01-08T21:26:26.755427365Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 21:37:00.754145  358628 request.go:629] Waited for 195.363523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:37:00.754225  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:37:00.754233  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:00.754244  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:00.754257  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:00.757879  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:37:00.757903  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:00.757911  358628 round_trippers.go:580]     Audit-Id: 92d59272-0294-4d76-8d1a-73a80204fc4e
	I0108 21:37:00.757916  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:00.757927  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:00.757933  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:00.757939  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:00.757948  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:00 GMT
	I0108 21:37:00.758304  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:37:00.758638  358628 pod_ready.go:92] pod "kube-controller-manager-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:37:00.758659  358628 pod_ready.go:81] duration metric: took 398.493912ms waiting for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:00.758674  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:00.954008  358628 request.go:629] Waited for 195.214469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:37:00.954077  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:37:00.954083  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:00.954092  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:00.954099  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:00.956472  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:37:00.956501  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:00.956511  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:00.956519  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:00 GMT
	I0108 21:37:00.956527  358628 round_trippers.go:580]     Audit-Id: 38ec8309-ba4a-4ff0-829e-ba232ffc690b
	I0108 21:37:00.956547  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:00.956559  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:00.956568  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:00.957073  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2c2t6","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e","resourceVersion":"506","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 21:37:01.153935  358628 request.go:629] Waited for 196.412826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:37:01.154034  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:37:01.154040  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:01.154048  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:01.154055  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:01.156749  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:37:01.156776  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:01.156786  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:01.156795  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:01 GMT
	I0108 21:37:01.156803  358628 round_trippers.go:580]     Audit-Id: 63382c69-b4f8-4988-bfcd-5e4f3c4fbafc
	I0108 21:37:01.156810  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:01.156823  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:01.156831  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:01.157006  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73","resourceVersion":"739","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_28_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0108 21:37:01.157318  358628 pod_ready.go:92] pod "kube-proxy-2c2t6" in "kube-system" namespace has status "Ready":"True"
	I0108 21:37:01.157366  358628 pod_ready.go:81] duration metric: took 398.682801ms waiting for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:01.157383  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:01.353641  358628 request.go:629] Waited for 196.169458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:37:01.353731  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:37:01.353739  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:01.353768  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:01.353783  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:01.356920  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:37:01.356943  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:01.356950  358628 round_trippers.go:580]     Audit-Id: 970626ee-9c52-4179-b0a7-01221ecb7113
	I0108 21:37:01.356955  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:01.356974  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:01.356980  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:01.356985  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:01.356993  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:01 GMT
	I0108 21:37:01.357632  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmjzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"fbfa39a4-ba62-4e31-8126-9a320311e846","resourceVersion":"754","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 21:37:01.553851  358628 request.go:629] Waited for 195.767866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:37:01.553934  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:37:01.553942  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:01.553950  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:01.553957  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:01.557000  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:37:01.557024  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:01.557031  358628 round_trippers.go:580]     Audit-Id: f918790d-2a9d-4241-85f8-5965286bb774
	I0108 21:37:01.557037  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:01.557042  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:01.557047  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:01.557053  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:01.557058  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:01 GMT
	I0108 21:37:01.557399  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:37:01.557710  358628 pod_ready.go:92] pod "kube-proxy-bmjzs" in "kube-system" namespace has status "Ready":"True"
	I0108 21:37:01.557732  358628 pod_ready.go:81] duration metric: took 400.33777ms waiting for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:01.557745  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cpq6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:01.753851  358628 request.go:629] Waited for 196.003601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cpq6p
	I0108 21:37:01.753936  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cpq6p
	I0108 21:37:01.753946  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:01.753972  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:01.753981  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:01.758456  358628 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:37:01.758479  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:01.758487  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:01.758492  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:01 GMT
	I0108 21:37:01.758497  358628 round_trippers.go:580]     Audit-Id: e9b465c9-5281-4cdf-825e-afd0942e8cb7
	I0108 21:37:01.758503  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:01.758530  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:01.758535  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:01.759194  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cpq6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"52634211-9ecd-4fd9-a8ce-88f67c668e75","resourceVersion":"717","creationTimestamp":"2024-01-08T21:28:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 21:37:01.954128  358628 request.go:629] Waited for 194.375005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:37:01.954197  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:37:01.954202  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:01.954209  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:01.954215  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:01.956982  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:37:01.957007  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:01.957018  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:01 GMT
	I0108 21:37:01.957046  358628 round_trippers.go:580]     Audit-Id: 2d536343-1ec3-4ad9-96dd-53baa1cd8699
	I0108 21:37:01.957054  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:01.957064  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:01.957075  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:01.957086  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:01.957699  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m03","uid":"d31cb22f-3104-4da9-bd90-2f7e1fa3889a","resourceVersion":"740","creationTimestamp":"2024-01-08T21:28:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_28_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0108 21:37:01.958074  358628 pod_ready.go:92] pod "kube-proxy-cpq6p" in "kube-system" namespace has status "Ready":"True"
	I0108 21:37:01.958098  358628 pod_ready.go:81] duration metric: took 400.341468ms waiting for pod "kube-proxy-cpq6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:01.958111  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:02.154115  358628 request.go:629] Waited for 195.934004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:37:02.154187  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:37:02.154192  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:02.154200  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:02.154213  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:02.157306  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:37:02.157326  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:02.157333  358628 round_trippers.go:580]     Audit-Id: 9ae62996-3165-4b58-9c56-12a7cc2258dc
	I0108 21:37:02.157339  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:02.157344  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:02.157349  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:02.157354  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:02.157360  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:02 GMT
	I0108 21:37:02.157660  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-962345","namespace":"kube-system","uid":"3778c0a4-1528-4336-9f02-b77a2a6a1c34","resourceVersion":"873","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.mirror":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.seen":"2024-01-08T21:26:26.755431609Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 21:37:02.353354  358628 request.go:629] Waited for 195.301013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:37:02.353432  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:37:02.353437  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:02.353444  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:02.353450  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:02.356013  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:37:02.356038  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:02.356048  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:02 GMT
	I0108 21:37:02.356056  358628 round_trippers.go:580]     Audit-Id: 314337f1-9dda-43f1-8928-c586af10b598
	I0108 21:37:02.356064  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:02.356072  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:02.356081  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:02.356109  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:02.356500  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 21:37:02.356971  358628 pod_ready.go:92] pod "kube-scheduler-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:37:02.356997  358628 pod_ready.go:81] duration metric: took 398.876475ms waiting for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:37:02.357018  358628 pod_ready.go:38] duration metric: took 6.066502899s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
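The loop above is minikube's pod_ready wait: for each system-critical pod it repeatedly GETs the pod from kube-system, inspects the Ready condition, and re-reads the node object before declaring the pod "Ready" (note the ~500ms spacing between polls and the client-side throttling waits). A minimal client-go sketch of the same Ready-condition poll, assuming a standard kubeconfig; the pod name and 6-minute timeout are taken from the log, but this is illustrative, not minikube's actual pod_ready helper:

// readypoll.go - sketch of a Ready-condition poll (not minikube's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5dd5756b68-v6dmd", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval seen above
	}
	fmt.Println("timed out waiting for Ready")
}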
	I0108 21:37:02.357041  358628 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:37:02.357104  358628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:37:02.372238  358628 command_runner.go:130] > 1126
	I0108 21:37:02.372280  358628 api_server.go:72] duration metric: took 7.193542895s to wait for apiserver process to appear ...
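The apiserver process check above is a single pgrep run over minikube's ssh_runner; it returns the PID (1126 here). A sketch of the same check run locally with os/exec, assuming passwordless sudo; the pattern string is copied from the log:

// pgrepcheck.go - sketch only; minikube executes this over SSH, not locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out))) // e.g. 1126 in this run
}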
	I0108 21:37:02.372290  358628 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:37:02.372322  358628 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0108 21:37:02.377226  358628 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I0108 21:37:02.377304  358628 round_trippers.go:463] GET https://192.168.39.239:8443/version
	I0108 21:37:02.377315  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:02.377327  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:02.377340  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:02.378389  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:37:02.378404  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:02.378414  358628 round_trippers.go:580]     Content-Length: 264
	I0108 21:37:02.378423  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:02 GMT
	I0108 21:37:02.378432  358628 round_trippers.go:580]     Audit-Id: 88aab64e-2bff-4bb9-874e-b743958b8a62
	I0108 21:37:02.378440  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:02.378446  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:02.378460  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:02.378472  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:02.378562  358628 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 21:37:02.378614  358628 api_server.go:141] control plane version: v1.28.4
	I0108 21:37:02.378629  358628 api_server.go:131] duration metric: took 6.331842ms to wait for apiserver health ...
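The healthz wait is a plain GET against https://192.168.39.239:8443/healthz expecting "ok", followed by /version to read the control-plane gitVersion (v1.28.4 here). A minimal sketch of that probe; it assumes the default binding that lets unauthenticated clients read /healthz and /version, and skips TLS verification for brevity, whereas minikube goes through an authenticated client:

// healthprobe.go - sketch of the healthz/version probe seen in the log.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}

	resp, err := client.Get("https://192.168.39.239:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect 200 and "ok"

	resp, err = client.Get("https://192.168.39.239:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.28.4 in this run
}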
	I0108 21:37:02.378638  358628 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:37:02.554081  358628 request.go:629] Waited for 175.357929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:37:02.554144  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:37:02.554149  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:02.554157  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:02.554169  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:02.558167  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:37:02.558188  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:02.558195  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:02.558201  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:02 GMT
	I0108 21:37:02.558220  358628 round_trippers.go:580]     Audit-Id: 0293e22f-3a16-4d32-bccb-d648ae9ffc3c
	I0108 21:37:02.558226  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:02.558230  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:02.558235  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:02.560545  358628 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"880"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"871","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81886 chars]
	I0108 21:37:02.563156  358628 system_pods.go:59] 12 kube-system pods found
	I0108 21:37:02.563180  358628 system_pods.go:61] "coredns-5dd5756b68-v6dmd" [9c1edff2-3b29-4045-b7b9-935c47115d16] Running
	I0108 21:37:02.563185  358628 system_pods.go:61] "etcd-multinode-962345" [44773ce7-5393-4178-a985-d8bf216f88f1] Running
	I0108 21:37:02.563188  358628 system_pods.go:61] "kindnet-5w9nh" [b84fc0ee-c9b1-4e6c-b066-536f2fd56d52] Running
	I0108 21:37:02.563194  358628 system_pods.go:61] "kindnet-mvv2x" [74892ac7-d01b-459d-8faf-b3a774b7b190] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:37:02.563208  358628 system_pods.go:61] "kindnet-psmlz" [4bcadd03-9934-4b8e-b732-6e1c97265ff7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:37:02.563219  358628 system_pods.go:61] "kube-apiserver-multinode-962345" [bea03251-08df-4434-bc4a-36ef454e151e] Running
	I0108 21:37:02.563232  358628 system_pods.go:61] "kube-controller-manager-multinode-962345" [80b86d62-83f0-4550-988f-6255409d39da] Running
	I0108 21:37:02.563239  358628 system_pods.go:61] "kube-proxy-2c2t6" [4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e] Running
	I0108 21:37:02.563243  358628 system_pods.go:61] "kube-proxy-bmjzs" [fbfa39a4-ba62-4e31-8126-9a320311e846] Running
	I0108 21:37:02.563251  358628 system_pods.go:61] "kube-proxy-cpq6p" [52634211-9ecd-4fd9-a8ce-88f67c668e75] Running
	I0108 21:37:02.563256  358628 system_pods.go:61] "kube-scheduler-multinode-962345" [3778c0a4-1528-4336-9f02-b77a2a6a1c34] Running
	I0108 21:37:02.563262  358628 system_pods.go:61] "storage-provisioner" [da89492c-e129-462d-b84e-2f4a10085550] Running
	I0108 21:37:02.563268  358628 system_pods.go:74] duration metric: took 184.624419ms to wait for pod list to return data ...
	I0108 21:37:02.563278  358628 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:37:02.753585  358628 request.go:629] Waited for 190.21939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:37:02.753660  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:37:02.753674  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:02.753687  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:02.753701  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:02.756562  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:37:02.756586  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:02.756595  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:02 GMT
	I0108 21:37:02.756603  358628 round_trippers.go:580]     Audit-Id: cce0d0f7-acca-4bef-a6c8-fabac1f42df8
	I0108 21:37:02.756611  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:02.756619  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:02.756632  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:02.756644  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:02.756657  358628 round_trippers.go:580]     Content-Length: 261
	I0108 21:37:02.756691  358628 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"880"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"15bd0783-c8a5-4e50-84fc-9a8ed6232cdb","resourceVersion":"369","creationTimestamp":"2024-01-08T21:26:39Z"}}]}
	I0108 21:37:02.756907  358628 default_sa.go:45] found service account: "default"
	I0108 21:37:02.756930  358628 default_sa.go:55] duration metric: took 193.642489ms for default service account to be created ...
	I0108 21:37:02.756942  358628 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:37:02.953330  358628 request.go:629] Waited for 196.315695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:37:02.953423  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:37:02.953428  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:02.953436  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:02.953442  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:02.958403  358628 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:37:02.958428  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:02.958437  358628 round_trippers.go:580]     Audit-Id: 5eac85a6-cc0c-4fc5-a89f-0d209a3441bf
	I0108 21:37:02.958445  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:02.958453  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:02.958462  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:02.958474  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:02.958487  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:02 GMT
	I0108 21:37:02.960001  358628 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"880"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"871","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81886 chars]
	I0108 21:37:02.963726  358628 system_pods.go:86] 12 kube-system pods found
	I0108 21:37:02.963771  358628 system_pods.go:89] "coredns-5dd5756b68-v6dmd" [9c1edff2-3b29-4045-b7b9-935c47115d16] Running
	I0108 21:37:02.963779  358628 system_pods.go:89] "etcd-multinode-962345" [44773ce7-5393-4178-a985-d8bf216f88f1] Running
	I0108 21:37:02.963786  358628 system_pods.go:89] "kindnet-5w9nh" [b84fc0ee-c9b1-4e6c-b066-536f2fd56d52] Running
	I0108 21:37:02.963806  358628 system_pods.go:89] "kindnet-mvv2x" [74892ac7-d01b-459d-8faf-b3a774b7b190] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:37:02.963818  358628 system_pods.go:89] "kindnet-psmlz" [4bcadd03-9934-4b8e-b732-6e1c97265ff7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 21:37:02.963826  358628 system_pods.go:89] "kube-apiserver-multinode-962345" [bea03251-08df-4434-bc4a-36ef454e151e] Running
	I0108 21:37:02.963846  358628 system_pods.go:89] "kube-controller-manager-multinode-962345" [80b86d62-83f0-4550-988f-6255409d39da] Running
	I0108 21:37:02.963852  358628 system_pods.go:89] "kube-proxy-2c2t6" [4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e] Running
	I0108 21:37:02.963858  358628 system_pods.go:89] "kube-proxy-bmjzs" [fbfa39a4-ba62-4e31-8126-9a320311e846] Running
	I0108 21:37:02.963864  358628 system_pods.go:89] "kube-proxy-cpq6p" [52634211-9ecd-4fd9-a8ce-88f67c668e75] Running
	I0108 21:37:02.963870  358628 system_pods.go:89] "kube-scheduler-multinode-962345" [3778c0a4-1528-4336-9f02-b77a2a6a1c34] Running
	I0108 21:37:02.963877  358628 system_pods.go:89] "storage-provisioner" [da89492c-e129-462d-b84e-2f4a10085550] Running
	I0108 21:37:02.963890  358628 system_pods.go:126] duration metric: took 206.941656ms to wait for k8s-apps to be running ...
	I0108 21:37:02.963898  358628 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:37:02.963960  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:37:02.981609  358628 system_svc.go:56] duration metric: took 17.699942ms WaitForService to wait for kubelet.
	I0108 21:37:02.981639  358628 kubeadm.go:581] duration metric: took 7.802904703s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:37:02.981662  358628 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:37:03.154140  358628 request.go:629] Waited for 172.395683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes
	I0108 21:37:03.154215  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes
	I0108 21:37:03.154220  358628 round_trippers.go:469] Request Headers:
	I0108 21:37:03.154228  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:37:03.154234  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:37:03.157166  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:37:03.157185  358628 round_trippers.go:577] Response Headers:
	I0108 21:37:03.157192  358628 round_trippers.go:580]     Audit-Id: 1b199d94-d49f-4cda-8ea9-532ed4872307
	I0108 21:37:03.157201  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:37:03.157210  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:37:03.157219  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:37:03.157227  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:37:03.157236  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:37:03 GMT
	I0108 21:37:03.157688  358628 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"880"},"items":[{"metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"860","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16180 chars]
	I0108 21:37:03.158517  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:37:03.158545  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:37:03.158558  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:37:03.158568  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:37:03.158572  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:37:03.158578  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:37:03.158587  358628 node_conditions.go:105] duration metric: took 176.919538ms to run NodePressure ...
	I0108 21:37:03.158603  358628 start.go:228] waiting for startup goroutines ...
	I0108 21:37:03.158616  358628 start.go:233] waiting for cluster config update ...
	I0108 21:37:03.158626  358628 start.go:242] writing updated cluster config ...
	I0108 21:37:03.159209  358628 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:37:03.159333  358628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:37:03.162410  358628 out.go:177] * Starting worker node multinode-962345-m02 in cluster multinode-962345
	I0108 21:37:03.163698  358628 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:37:03.163718  358628 cache.go:56] Caching tarball of preloaded images
	I0108 21:37:03.163792  358628 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:37:03.163803  358628 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:37:03.163885  358628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:37:03.164094  358628 start.go:365] acquiring machines lock for multinode-962345-m02: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:37:03.164156  358628 start.go:369] acquired machines lock for "multinode-962345-m02" in 31.016µs
	I0108 21:37:03.164178  358628 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:37:03.164187  358628 fix.go:54] fixHost starting: m02
	I0108 21:37:03.164474  358628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:37:03.164498  358628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:37:03.178956  358628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0108 21:37:03.179429  358628 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:37:03.179917  358628 main.go:141] libmachine: Using API Version  1
	I0108 21:37:03.179943  358628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:37:03.180286  358628 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:37:03.180466  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:37:03.180608  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetState
	I0108 21:37:03.182157  358628 fix.go:102] recreateIfNeeded on multinode-962345-m02: state=Running err=<nil>
	W0108 21:37:03.182176  358628 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:37:03.184309  358628 out.go:177] * Updating the running kvm2 "multinode-962345-m02" VM ...
	I0108 21:37:03.185773  358628 machine.go:88] provisioning docker machine ...
	I0108 21:37:03.185793  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:37:03.186013  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetMachineName
	I0108 21:37:03.186171  358628 buildroot.go:166] provisioning hostname "multinode-962345-m02"
	I0108 21:37:03.186190  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetMachineName
	I0108 21:37:03.186339  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:37:03.188711  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.189197  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:37:03.189228  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.189392  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:37:03.189590  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:37:03.189733  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:37:03.189856  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:37:03.190011  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:37:03.190332  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:37:03.190352  358628 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-962345-m02 && echo "multinode-962345-m02" | sudo tee /etc/hostname
	I0108 21:37:03.317942  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-962345-m02
	
	I0108 21:37:03.317983  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:37:03.320621  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.320943  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:37:03.320973  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.321211  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:37:03.321433  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:37:03.321643  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:37:03.321824  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:37:03.321992  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:37:03.322356  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:37:03.322377  358628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-962345-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-962345-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-962345-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:37:03.436407  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:37:03.436473  358628 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 21:37:03.436497  358628 buildroot.go:174] setting up certificates
	I0108 21:37:03.436508  358628 provision.go:83] configureAuth start
	I0108 21:37:03.436521  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetMachineName
	I0108 21:37:03.436813  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetIP
	I0108 21:37:03.439330  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.439677  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:37:03.439708  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.439833  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:37:03.441709  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.442085  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:37:03.442105  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.442263  358628 provision.go:138] copyHostCerts
	I0108 21:37:03.442294  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:37:03.442330  358628 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 21:37:03.442343  358628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:37:03.442425  358628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 21:37:03.442515  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:37:03.442547  358628 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 21:37:03.442558  358628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:37:03.442597  358628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 21:37:03.442659  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:37:03.442683  358628 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 21:37:03.442692  358628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:37:03.442725  358628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 21:37:03.442788  358628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.multinode-962345-m02 san=[192.168.39.111 192.168.39.111 localhost 127.0.0.1 minikube multinode-962345-m02]
	I0108 21:37:03.664726  358628 provision.go:172] copyRemoteCerts
	I0108 21:37:03.664812  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:37:03.664845  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:37:03.667765  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.668234  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:37:03.668271  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.668486  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:37:03.668727  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:37:03.668931  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:37:03.669115  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa Username:docker}
	I0108 21:37:03.755563  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:37:03.755648  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:37:03.779679  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:37:03.779762  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 21:37:03.802682  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:37:03.802784  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:37:03.825024  358628 provision.go:86] duration metric: configureAuth took 388.497254ms
	I0108 21:37:03.825067  358628 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:37:03.825366  358628 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:37:03.825467  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:37:03.828284  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.828633  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:37:03.828669  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:37:03.828854  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:37:03.829113  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:37:03.829314  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:37:03.829465  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:37:03.829642  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:37:03.829980  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:37:03.830002  358628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:38:34.277473  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:38:34.277512  358628 machine.go:91] provisioned docker machine in 1m31.091721653s
	I0108 21:38:34.277528  358628 start.go:300] post-start starting for "multinode-962345-m02" (driver="kvm2")
	I0108 21:38:34.277545  358628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:38:34.277571  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:38:34.277919  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:38:34.277952  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:38:34.280949  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.281393  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:38:34.281424  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.281605  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:38:34.281817  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:38:34.281986  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:38:34.282102  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa Username:docker}
	I0108 21:38:34.373635  358628 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:38:34.377895  358628 command_runner.go:130] > NAME=Buildroot
	I0108 21:38:34.377921  358628 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0108 21:38:34.377928  358628 command_runner.go:130] > ID=buildroot
	I0108 21:38:34.377936  358628 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:38:34.377943  358628 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:38:34.378052  358628 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:38:34.378089  358628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 21:38:34.378163  358628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 21:38:34.378241  358628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 21:38:34.378255  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /etc/ssl/certs/3419822.pem
	I0108 21:38:34.378372  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:38:34.386856  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:38:34.411744  358628 start.go:303] post-start completed in 134.196738ms
	I0108 21:38:34.411775  358628 fix.go:56] fixHost completed within 1m31.247586917s
	I0108 21:38:34.411808  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:38:34.414320  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.414712  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:38:34.414747  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.414877  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:38:34.415075  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:38:34.415238  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:38:34.415403  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:38:34.415594  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:38:34.415930  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0108 21:38:34.415941  358628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:38:34.527954  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704749914.516415205
	
	I0108 21:38:34.527984  358628 fix.go:206] guest clock: 1704749914.516415205
	I0108 21:38:34.527996  358628 fix.go:219] Guest: 2024-01-08 21:38:34.516415205 +0000 UTC Remote: 2024-01-08 21:38:34.411782074 +0000 UTC m=+448.135416657 (delta=104.633131ms)
	I0108 21:38:34.528017  358628 fix.go:190] guest clock delta is within tolerance: 104.633131ms
	I0108 21:38:34.528024  358628 start.go:83] releasing machines lock for "multinode-962345-m02", held for 1m31.363854274s
	I0108 21:38:34.528048  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:38:34.528368  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetIP
	I0108 21:38:34.530813  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.531146  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:38:34.531170  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.533472  358628 out.go:177] * Found network options:
	I0108 21:38:34.535143  358628 out.go:177]   - NO_PROXY=192.168.39.239
	W0108 21:38:34.536592  358628 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:38:34.536633  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:38:34.537172  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:38:34.537353  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:38:34.537470  358628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:38:34.537518  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	W0108 21:38:34.537588  358628 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:38:34.537674  358628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:38:34.537699  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:38:34.540211  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.540386  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.540644  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:38:34.540675  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.540707  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:38:34.540733  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:34.540773  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:38:34.540951  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:38:34.540954  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:38:34.541142  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:38:34.541144  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:38:34.541286  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:38:34.541288  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa Username:docker}
	I0108 21:38:34.541438  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa Username:docker}
	I0108 21:38:34.647021  358628 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:38:34.774715  358628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:38:34.780811  358628 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 21:38:34.780861  358628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:38:34.780946  358628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:38:34.789132  358628 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 21:38:34.789162  358628 start.go:475] detecting cgroup driver to use...
	I0108 21:38:34.789230  358628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:38:34.802424  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:38:34.814017  358628 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:38:34.814073  358628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:38:34.825853  358628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:38:34.837993  358628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:38:34.964467  358628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:38:35.086584  358628 docker.go:219] disabling docker service ...
	I0108 21:38:35.086664  358628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:38:35.101885  358628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:38:35.114149  358628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:38:35.232131  358628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:38:35.363607  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:38:35.377080  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:38:35.393309  358628 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 21:38:35.393760  358628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:38:35.393818  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:38:35.403690  358628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:38:35.403744  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:38:35.413435  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:38:35.422645  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:38:35.431734  358628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:38:35.441081  358628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:38:35.449917  358628 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:38:35.450009  358628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:38:35.458374  358628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:38:35.582177  358628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:38:35.820153  358628 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:38:35.820248  358628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:38:35.825281  358628 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 21:38:35.825299  358628 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:38:35.825306  358628 command_runner.go:130] > Device: 16h/22d	Inode: 1199        Links: 1
	I0108 21:38:35.825312  358628 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:38:35.825318  358628 command_runner.go:130] > Access: 2024-01-08 21:38:35.744551743 +0000
	I0108 21:38:35.825323  358628 command_runner.go:130] > Modify: 2024-01-08 21:38:35.744551743 +0000
	I0108 21:38:35.825332  358628 command_runner.go:130] > Change: 2024-01-08 21:38:35.744551743 +0000
	I0108 21:38:35.825339  358628 command_runner.go:130] >  Birth: -
	I0108 21:38:35.825678  358628 start.go:543] Will wait 60s for crictl version
	I0108 21:38:35.825737  358628 ssh_runner.go:195] Run: which crictl
	I0108 21:38:35.829192  358628 command_runner.go:130] > /usr/bin/crictl
	I0108 21:38:35.829364  358628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:38:35.872261  358628 command_runner.go:130] > Version:  0.1.0
	I0108 21:38:35.872306  358628 command_runner.go:130] > RuntimeName:  cri-o
	I0108 21:38:35.872314  358628 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 21:38:35.872324  358628 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:38:35.872383  358628 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:38:35.872460  358628 ssh_runner.go:195] Run: crio --version
	I0108 21:38:35.920024  358628 command_runner.go:130] > crio version 1.24.1
	I0108 21:38:35.920045  358628 command_runner.go:130] > Version:          1.24.1
	I0108 21:38:35.920058  358628 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:38:35.920062  358628 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:38:35.920068  358628 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:38:35.920073  358628 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:38:35.920077  358628 command_runner.go:130] > Compiler:         gc
	I0108 21:38:35.920081  358628 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:38:35.920086  358628 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:38:35.920093  358628 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:38:35.920098  358628 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:38:35.920102  358628 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:38:35.921408  358628 ssh_runner.go:195] Run: crio --version
	I0108 21:38:35.968082  358628 command_runner.go:130] > crio version 1.24.1
	I0108 21:38:35.968123  358628 command_runner.go:130] > Version:          1.24.1
	I0108 21:38:35.968141  358628 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:38:35.968149  358628 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:38:35.968159  358628 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:38:35.968166  358628 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:38:35.968174  358628 command_runner.go:130] > Compiler:         gc
	I0108 21:38:35.968182  358628 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:38:35.968191  358628 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:38:35.968206  358628 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:38:35.968213  358628 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:38:35.968221  358628 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:38:35.971810  358628 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:38:35.973337  358628 out.go:177]   - env NO_PROXY=192.168.39.239
	I0108 21:38:35.974598  358628 main.go:141] libmachine: (multinode-962345-m02) Calling .GetIP
	I0108 21:38:35.977606  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:35.978025  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:38:35.978061  358628 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:38:35.978221  358628 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:38:35.982133  358628 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0108 21:38:35.982293  358628 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345 for IP: 192.168.39.111
	I0108 21:38:35.982316  358628 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:38:35.982455  358628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 21:38:35.982495  358628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 21:38:35.982508  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:38:35.982525  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:38:35.982537  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:38:35.982549  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:38:35.982600  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 21:38:35.982632  358628 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 21:38:35.982642  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:38:35.982663  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:38:35.982686  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:38:35.982710  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 21:38:35.982753  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:38:35.982777  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem -> /usr/share/ca-certificates/341982.pem
	I0108 21:38:35.982791  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /usr/share/ca-certificates/3419822.pem
	I0108 21:38:35.982806  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:38:35.983194  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:38:36.006901  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:38:36.029103  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:38:36.052053  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:38:36.075322  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 21:38:36.098411  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 21:38:36.124725  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:38:36.147789  358628 ssh_runner.go:195] Run: openssl version
	I0108 21:38:36.153719  358628 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:38:36.154138  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 21:38:36.163651  358628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 21:38:36.168015  358628 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:38:36.168317  358628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:38:36.168375  358628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 21:38:36.173373  358628 command_runner.go:130] > 51391683
	I0108 21:38:36.173756  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 21:38:36.182181  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 21:38:36.191706  358628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 21:38:36.196113  358628 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:38:36.196142  358628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:38:36.196183  358628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 21:38:36.201359  358628 command_runner.go:130] > 3ec20f2e
	I0108 21:38:36.201427  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:38:36.209485  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:38:36.219745  358628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:38:36.223898  358628 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:38:36.224051  358628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:38:36.224097  358628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:38:36.229231  358628 command_runner.go:130] > b5213941
	I0108 21:38:36.229304  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
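The sequence above is the standard OpenSSL CA-directory layout: each certificate is copied into /usr/share/ca-certificates, its subject hash is computed with "openssl x509 -hash -noout", and a <hash>.0 symlink is created under /etc/ssl/certs so TLS clients on the node can find it. A minimal Go sketch of that flow, shelling out to the same two commands; the helper name and direct use of os/exec are illustrative, not minikube's actual code path:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of certPath and creates the
	// /etc/ssl/certs/<hash>.0 symlink that TLS libraries use for CA lookup.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
		link := "/etc/ssl/certs/" + hash + ".0"
		// Equivalent of the logged `ln -fs`; needs root, hence sudo.
		return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}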
	I0108 21:38:36.237428  358628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:38:36.241234  358628 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:38:36.241281  358628 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
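The failed ls probe above is how the bootstrapper decides this is probably a first start: exit status 2 means /var/lib/minikube/certs/etcd does not exist yet, so the etcd certificates still need to be generated. A tiny sketch of that check, assuming a local shell rather than the SSH runner used in the log:

	package bootstrap

	import "os/exec"

	// etcdCertsPresent mirrors the "ls /var/lib/minikube/certs/etcd" probe: a
	// non-zero exit (the directory is missing) is treated as a likely first start.
	func etcdCertsPresent() bool {
		return exec.Command("ls", "/var/lib/minikube/certs/etcd").Run() == nil
	}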
	I0108 21:38:36.241378  358628 ssh_runner.go:195] Run: crio config
	I0108 21:38:36.300788  358628 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 21:38:36.300830  358628 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 21:38:36.300842  358628 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 21:38:36.300847  358628 command_runner.go:130] > #
	I0108 21:38:36.300858  358628 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 21:38:36.300871  358628 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 21:38:36.300885  358628 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 21:38:36.300896  358628 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 21:38:36.300903  358628 command_runner.go:130] > # reload'.
	I0108 21:38:36.300909  358628 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 21:38:36.300915  358628 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 21:38:36.300922  358628 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 21:38:36.300929  358628 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 21:38:36.300937  358628 command_runner.go:130] > [crio]
	I0108 21:38:36.300948  358628 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 21:38:36.300957  358628 command_runner.go:130] > # container images, in this directory.
	I0108 21:38:36.300971  358628 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 21:38:36.300986  358628 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 21:38:36.300996  358628 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 21:38:36.301008  358628 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 21:38:36.301029  358628 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 21:38:36.301041  358628 command_runner.go:130] > storage_driver = "overlay"
	I0108 21:38:36.301051  358628 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 21:38:36.301066  358628 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 21:38:36.301073  358628 command_runner.go:130] > storage_option = [
	I0108 21:38:36.301081  358628 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 21:38:36.301087  358628 command_runner.go:130] > ]
	I0108 21:38:36.301097  358628 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 21:38:36.301105  358628 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 21:38:36.301110  358628 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 21:38:36.301115  358628 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 21:38:36.301123  358628 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 21:38:36.301134  358628 command_runner.go:130] > # always happen on a node reboot
	I0108 21:38:36.301143  358628 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 21:38:36.301163  358628 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 21:38:36.301173  358628 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 21:38:36.301187  358628 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 21:38:36.301198  358628 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 21:38:36.301207  358628 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 21:38:36.301222  358628 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 21:38:36.301233  358628 command_runner.go:130] > # internal_wipe = true
	I0108 21:38:36.301247  358628 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 21:38:36.301261  358628 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 21:38:36.301274  358628 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 21:38:36.301284  358628 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 21:38:36.301296  358628 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 21:38:36.301306  358628 command_runner.go:130] > [crio.api]
	I0108 21:38:36.301315  358628 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 21:38:36.301324  358628 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 21:38:36.301333  358628 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 21:38:36.301344  358628 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 21:38:36.301355  358628 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 21:38:36.301367  358628 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 21:38:36.301377  358628 command_runner.go:130] > # stream_port = "0"
	I0108 21:38:36.301386  358628 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 21:38:36.301396  358628 command_runner.go:130] > # stream_enable_tls = false
	I0108 21:38:36.301409  358628 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 21:38:36.301417  358628 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 21:38:36.301427  358628 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 21:38:36.301449  358628 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 21:38:36.301456  358628 command_runner.go:130] > # minutes.
	I0108 21:38:36.301466  358628 command_runner.go:130] > # stream_tls_cert = ""
	I0108 21:38:36.301479  358628 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 21:38:36.301489  358628 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 21:38:36.301499  358628 command_runner.go:130] > # stream_tls_key = ""
	I0108 21:38:36.301512  358628 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 21:38:36.301524  358628 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 21:38:36.301535  358628 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 21:38:36.301542  358628 command_runner.go:130] > # stream_tls_ca = ""
	I0108 21:38:36.301555  358628 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:38:36.301572  358628 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 21:38:36.301587  358628 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:38:36.301598  358628 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 21:38:36.301621  358628 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 21:38:36.301632  358628 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 21:38:36.301637  358628 command_runner.go:130] > [crio.runtime]
	I0108 21:38:36.301651  358628 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 21:38:36.301663  358628 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 21:38:36.301671  358628 command_runner.go:130] > # "nofile=1024:2048"
	I0108 21:38:36.301684  358628 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 21:38:36.301692  358628 command_runner.go:130] > # default_ulimits = [
	I0108 21:38:36.301698  358628 command_runner.go:130] > # ]
	I0108 21:38:36.301712  358628 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 21:38:36.301720  358628 command_runner.go:130] > # no_pivot = false
	I0108 21:38:36.301733  358628 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 21:38:36.301745  358628 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 21:38:36.301757  358628 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 21:38:36.301770  358628 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 21:38:36.301782  358628 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 21:38:36.301799  358628 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:38:36.301811  358628 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 21:38:36.301821  358628 command_runner.go:130] > # Cgroup setting for conmon
	I0108 21:38:36.301833  358628 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 21:38:36.301844  358628 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 21:38:36.301858  358628 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 21:38:36.301871  358628 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 21:38:36.301886  358628 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:38:36.301896  358628 command_runner.go:130] > conmon_env = [
	I0108 21:38:36.301911  358628 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 21:38:36.301920  358628 command_runner.go:130] > ]
	I0108 21:38:36.301930  358628 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 21:38:36.301942  358628 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 21:38:36.301955  358628 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 21:38:36.301963  358628 command_runner.go:130] > # default_env = [
	I0108 21:38:36.301973  358628 command_runner.go:130] > # ]
	I0108 21:38:36.301986  358628 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 21:38:36.301997  358628 command_runner.go:130] > # selinux = false
	I0108 21:38:36.302009  358628 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 21:38:36.302023  358628 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 21:38:36.302036  358628 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 21:38:36.302047  358628 command_runner.go:130] > # seccomp_profile = ""
	I0108 21:38:36.302060  358628 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 21:38:36.302072  358628 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 21:38:36.302096  358628 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 21:38:36.302108  358628 command_runner.go:130] > # which might increase security.
	I0108 21:38:36.302120  358628 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 21:38:36.302135  358628 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 21:38:36.302149  358628 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 21:38:36.302162  358628 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 21:38:36.302176  358628 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 21:38:36.302188  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:38:36.302231  358628 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 21:38:36.302249  358628 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 21:38:36.302260  358628 command_runner.go:130] > # the cgroup blockio controller.
	I0108 21:38:36.302268  358628 command_runner.go:130] > # blockio_config_file = ""
	I0108 21:38:36.302283  358628 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 21:38:36.302293  358628 command_runner.go:130] > # irqbalance daemon.
	I0108 21:38:36.302306  358628 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 21:38:36.302321  358628 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 21:38:36.302333  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:38:36.302344  358628 command_runner.go:130] > # rdt_config_file = ""
	I0108 21:38:36.302357  358628 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 21:38:36.302368  358628 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 21:38:36.302379  358628 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 21:38:36.302390  358628 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 21:38:36.302404  358628 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 21:38:36.302418  358628 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 21:38:36.302428  358628 command_runner.go:130] > # will be added.
	I0108 21:38:36.302436  358628 command_runner.go:130] > # default_capabilities = [
	I0108 21:38:36.302445  358628 command_runner.go:130] > # 	"CHOWN",
	I0108 21:38:36.302453  358628 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 21:38:36.302463  358628 command_runner.go:130] > # 	"FSETID",
	I0108 21:38:36.302473  358628 command_runner.go:130] > # 	"FOWNER",
	I0108 21:38:36.302480  358628 command_runner.go:130] > # 	"SETGID",
	I0108 21:38:36.302491  358628 command_runner.go:130] > # 	"SETUID",
	I0108 21:38:36.302501  358628 command_runner.go:130] > # 	"SETPCAP",
	I0108 21:38:36.302510  358628 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 21:38:36.302520  358628 command_runner.go:130] > # 	"KILL",
	I0108 21:38:36.302526  358628 command_runner.go:130] > # ]
	I0108 21:38:36.302542  358628 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 21:38:36.302556  358628 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:38:36.302572  358628 command_runner.go:130] > # default_sysctls = [
	I0108 21:38:36.302581  358628 command_runner.go:130] > # ]
	I0108 21:38:36.302595  358628 command_runner.go:130] > # List of devices on the host that a
	I0108 21:38:36.302609  358628 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 21:38:36.302619  358628 command_runner.go:130] > # allowed_devices = [
	I0108 21:38:36.302627  358628 command_runner.go:130] > # 	"/dev/fuse",
	I0108 21:38:36.302635  358628 command_runner.go:130] > # ]
	I0108 21:38:36.302645  358628 command_runner.go:130] > # List of additional devices, specified as
	I0108 21:38:36.302661  358628 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 21:38:36.302673  358628 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 21:38:36.302699  358628 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:38:36.302709  358628 command_runner.go:130] > # additional_devices = [
	I0108 21:38:36.302715  358628 command_runner.go:130] > # ]
	I0108 21:38:36.302728  358628 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 21:38:36.302738  358628 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 21:38:36.302749  358628 command_runner.go:130] > # 	"/etc/cdi",
	I0108 21:38:36.302759  358628 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 21:38:36.302768  358628 command_runner.go:130] > # ]
	I0108 21:38:36.302779  358628 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 21:38:36.302793  358628 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 21:38:36.302803  358628 command_runner.go:130] > # Defaults to false.
	I0108 21:38:36.302812  358628 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 21:38:36.302826  358628 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 21:38:36.302839  358628 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 21:38:36.302849  358628 command_runner.go:130] > # hooks_dir = [
	I0108 21:38:36.302862  358628 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 21:38:36.302871  358628 command_runner.go:130] > # ]
	I0108 21:38:36.302883  358628 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 21:38:36.302897  358628 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 21:38:36.302910  358628 command_runner.go:130] > # its default mounts from the following two files:
	I0108 21:38:36.302916  358628 command_runner.go:130] > #
	I0108 21:38:36.302930  358628 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 21:38:36.302944  358628 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 21:38:36.302957  358628 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 21:38:36.302967  358628 command_runner.go:130] > #
	I0108 21:38:36.302980  358628 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 21:38:36.302994  358628 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 21:38:36.303010  358628 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 21:38:36.303022  358628 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 21:38:36.303030  358628 command_runner.go:130] > #
	I0108 21:38:36.303039  358628 command_runner.go:130] > # default_mounts_file = ""
	I0108 21:38:36.303052  358628 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 21:38:36.303066  358628 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 21:38:36.303076  358628 command_runner.go:130] > pids_limit = 1024
	I0108 21:38:36.303089  358628 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 21:38:36.303101  358628 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 21:38:36.303116  358628 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 21:38:36.303136  358628 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 21:38:36.303146  358628 command_runner.go:130] > # log_size_max = -1
	I0108 21:38:36.303161  358628 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 21:38:36.303170  358628 command_runner.go:130] > # log_to_journald = false
	I0108 21:38:36.303184  358628 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 21:38:36.303199  358628 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 21:38:36.303211  358628 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 21:38:36.303224  358628 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 21:38:36.303237  358628 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 21:38:36.303247  358628 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 21:38:36.303258  358628 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 21:38:36.303268  358628 command_runner.go:130] > # read_only = false
	I0108 21:38:36.303282  358628 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 21:38:36.303298  358628 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 21:38:36.303309  358628 command_runner.go:130] > # live configuration reload.
	I0108 21:38:36.303318  358628 command_runner.go:130] > # log_level = "info"
	I0108 21:38:36.303330  358628 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 21:38:36.303342  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:38:36.303350  358628 command_runner.go:130] > # log_filter = ""
	I0108 21:38:36.303377  358628 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 21:38:36.303392  358628 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 21:38:36.303403  358628 command_runner.go:130] > # separated by comma.
	I0108 21:38:36.303413  358628 command_runner.go:130] > # uid_mappings = ""
	I0108 21:38:36.303430  358628 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 21:38:36.303444  358628 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 21:38:36.303452  358628 command_runner.go:130] > # separated by comma.
	I0108 21:38:36.303462  358628 command_runner.go:130] > # gid_mappings = ""
	I0108 21:38:36.303476  358628 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 21:38:36.303490  358628 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:38:36.303504  358628 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:38:36.303516  358628 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 21:38:36.303530  358628 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 21:38:36.303544  358628 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:38:36.303557  358628 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:38:36.303572  358628 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 21:38:36.303586  358628 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 21:38:36.303600  358628 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 21:38:36.303613  358628 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 21:38:36.303624  358628 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 21:38:36.303635  358628 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 21:38:36.303647  358628 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 21:38:36.303660  358628 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 21:38:36.303671  358628 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 21:38:36.303682  358628 command_runner.go:130] > drop_infra_ctr = false
	I0108 21:38:36.303694  358628 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 21:38:36.303707  358628 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 21:38:36.303722  358628 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 21:38:36.303733  358628 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 21:38:36.303744  358628 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 21:38:36.303756  358628 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 21:38:36.303763  358628 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 21:38:36.303775  358628 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 21:38:36.303786  358628 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 21:38:36.303796  358628 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 21:38:36.303809  358628 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 21:38:36.303823  358628 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 21:38:36.303834  358628 command_runner.go:130] > # default_runtime = "runc"
	I0108 21:38:36.303845  358628 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 21:38:36.303861  358628 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 21:38:36.303884  358628 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 21:38:36.303897  358628 command_runner.go:130] > # creation as a file is not desired either.
	I0108 21:38:36.303913  358628 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 21:38:36.303923  358628 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 21:38:36.303930  358628 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 21:38:36.303933  358628 command_runner.go:130] > # ]
	I0108 21:38:36.303940  358628 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 21:38:36.303952  358628 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 21:38:36.303965  358628 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 21:38:36.303978  358628 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 21:38:36.303984  358628 command_runner.go:130] > #
	I0108 21:38:36.303995  358628 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 21:38:36.304005  358628 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 21:38:36.304015  358628 command_runner.go:130] > #  runtime_type = "oci"
	I0108 21:38:36.304025  358628 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 21:38:36.304036  358628 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 21:38:36.304045  358628 command_runner.go:130] > #  allowed_annotations = []
	I0108 21:38:36.304049  358628 command_runner.go:130] > # Where:
	I0108 21:38:36.304065  358628 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 21:38:36.304079  358628 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 21:38:36.304092  358628 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 21:38:36.304105  358628 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 21:38:36.304115  358628 command_runner.go:130] > #   in $PATH.
	I0108 21:38:36.304127  358628 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 21:38:36.304138  358628 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 21:38:36.304147  358628 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 21:38:36.304154  358628 command_runner.go:130] > #   state.
	I0108 21:38:36.304167  358628 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 21:38:36.304183  358628 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 21:38:36.304196  358628 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 21:38:36.304208  358628 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 21:38:36.304222  358628 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 21:38:36.304232  358628 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 21:38:36.304239  358628 command_runner.go:130] > #   The currently recognized values are:
	I0108 21:38:36.304252  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 21:38:36.304267  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 21:38:36.304282  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 21:38:36.304296  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 21:38:36.304311  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 21:38:36.304325  358628 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 21:38:36.304335  358628 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 21:38:36.304347  358628 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 21:38:36.304359  358628 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 21:38:36.304370  358628 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 21:38:36.304380  358628 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 21:38:36.304388  358628 command_runner.go:130] > runtime_type = "oci"
	I0108 21:38:36.304400  358628 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 21:38:36.304410  358628 command_runner.go:130] > runtime_config_path = ""
	I0108 21:38:36.304417  358628 command_runner.go:130] > monitor_path = ""
	I0108 21:38:36.304421  358628 command_runner.go:130] > monitor_cgroup = ""
	I0108 21:38:36.304426  358628 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 21:38:36.304436  358628 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 21:38:36.304446  358628 command_runner.go:130] > # running containers
	I0108 21:38:36.304454  358628 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 21:38:36.304470  358628 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 21:38:36.304502  358628 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 21:38:36.304514  358628 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0108 21:38:36.304520  358628 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 21:38:36.304526  358628 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 21:38:36.304536  358628 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 21:38:36.304548  358628 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 21:38:36.304557  358628 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 21:38:36.304572  358628 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 21:38:36.304586  358628 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 21:38:36.304598  358628 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 21:38:36.304611  358628 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 21:38:36.304621  358628 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 21:38:36.304635  358628 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 21:38:36.304649  358628 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 21:38:36.304664  358628 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 21:38:36.304680  358628 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 21:38:36.304692  358628 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 21:38:36.304708  358628 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 21:38:36.304715  358628 command_runner.go:130] > # Example:
	I0108 21:38:36.304720  358628 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 21:38:36.304731  358628 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 21:38:36.304743  358628 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 21:38:36.304752  358628 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 21:38:36.304762  358628 command_runner.go:130] > # cpuset = 0
	I0108 21:38:36.304769  358628 command_runner.go:130] > # cpushares = "0-1"
	I0108 21:38:36.304778  358628 command_runner.go:130] > # Where:
	I0108 21:38:36.304786  358628 command_runner.go:130] > # The workload name is workload-type.
	I0108 21:38:36.304800  358628 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 21:38:36.304810  358628 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 21:38:36.304819  358628 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 21:38:36.304832  358628 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 21:38:36.304846  358628 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 21:38:36.304852  358628 command_runner.go:130] > # 
	I0108 21:38:36.304866  358628 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 21:38:36.304874  358628 command_runner.go:130] > #
	I0108 21:38:36.304884  358628 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 21:38:36.304897  358628 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 21:38:36.304910  358628 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 21:38:36.304919  358628 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 21:38:36.304928  358628 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 21:38:36.304938  358628 command_runner.go:130] > [crio.image]
	I0108 21:38:36.304952  358628 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 21:38:36.304963  358628 command_runner.go:130] > # default_transport = "docker://"
	I0108 21:38:36.304976  358628 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 21:38:36.304989  358628 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:38:36.305000  358628 command_runner.go:130] > # global_auth_file = ""
	I0108 21:38:36.305008  358628 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 21:38:36.305013  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:38:36.305024  358628 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 21:38:36.305038  358628 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 21:38:36.305049  358628 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:38:36.305060  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:38:36.305071  358628 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 21:38:36.305082  358628 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 21:38:36.305094  358628 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 21:38:36.305105  358628 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 21:38:36.305112  358628 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 21:38:36.305123  358628 command_runner.go:130] > # pause_command = "/pause"
	I0108 21:38:36.305135  358628 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 21:38:36.305149  358628 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 21:38:36.305162  358628 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 21:38:36.305176  358628 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 21:38:36.305188  358628 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 21:38:36.305196  358628 command_runner.go:130] > # signature_policy = ""
	I0108 21:38:36.305202  358628 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 21:38:36.305215  358628 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 21:38:36.305225  358628 command_runner.go:130] > # changing them here.
	I0108 21:38:36.305233  358628 command_runner.go:130] > # insecure_registries = [
	I0108 21:38:36.305242  358628 command_runner.go:130] > # ]
	I0108 21:38:36.305252  358628 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 21:38:36.305264  358628 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 21:38:36.305274  358628 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 21:38:36.305286  358628 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 21:38:36.305293  358628 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 21:38:36.305302  358628 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 21:38:36.305308  358628 command_runner.go:130] > # CNI plugins.
	I0108 21:38:36.305318  358628 command_runner.go:130] > [crio.network]
	I0108 21:38:36.305328  358628 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 21:38:36.305341  358628 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 21:38:36.305351  358628 command_runner.go:130] > # cni_default_network = ""
	I0108 21:38:36.305362  358628 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 21:38:36.305373  358628 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 21:38:36.305385  358628 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 21:38:36.305392  358628 command_runner.go:130] > # plugin_dirs = [
	I0108 21:38:36.305397  358628 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 21:38:36.305406  358628 command_runner.go:130] > # ]
	I0108 21:38:36.305416  358628 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 21:38:36.305426  358628 command_runner.go:130] > [crio.metrics]
	I0108 21:38:36.305435  358628 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 21:38:36.305446  358628 command_runner.go:130] > enable_metrics = true
	I0108 21:38:36.305457  358628 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 21:38:36.305467  358628 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 21:38:36.305480  358628 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0108 21:38:36.305493  358628 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 21:38:36.305503  358628 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 21:38:36.305509  358628 command_runner.go:130] > # metrics_collectors = [
	I0108 21:38:36.305519  358628 command_runner.go:130] > # 	"operations",
	I0108 21:38:36.305530  358628 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 21:38:36.305540  358628 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 21:38:36.305550  358628 command_runner.go:130] > # 	"operations_errors",
	I0108 21:38:36.305558  358628 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 21:38:36.305572  358628 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 21:38:36.305581  358628 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 21:38:36.305586  358628 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 21:38:36.305595  358628 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 21:38:36.305603  358628 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 21:38:36.305613  358628 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 21:38:36.305621  358628 command_runner.go:130] > # 	"containers_oom_total",
	I0108 21:38:36.305632  358628 command_runner.go:130] > # 	"containers_oom",
	I0108 21:38:36.305641  358628 command_runner.go:130] > # 	"processes_defunct",
	I0108 21:38:36.305650  358628 command_runner.go:130] > # 	"operations_total",
	I0108 21:38:36.305660  358628 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 21:38:36.305671  358628 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 21:38:36.305679  358628 command_runner.go:130] > # 	"operations_errors_total",
	I0108 21:38:36.305686  358628 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 21:38:36.305693  358628 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 21:38:36.305703  358628 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 21:38:36.305715  358628 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 21:38:36.305724  358628 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 21:38:36.305734  358628 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 21:38:36.305741  358628 command_runner.go:130] > # ]
	I0108 21:38:36.305753  358628 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 21:38:36.305762  358628 command_runner.go:130] > # metrics_port = 9090
	I0108 21:38:36.305771  358628 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 21:38:36.305776  358628 command_runner.go:130] > # metrics_socket = ""
	I0108 21:38:36.305789  358628 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 21:38:36.305802  358628 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 21:38:36.305814  358628 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 21:38:36.305825  358628 command_runner.go:130] > # certificate on any modification event.
	I0108 21:38:36.305835  358628 command_runner.go:130] > # metrics_cert = ""
	I0108 21:38:36.305848  358628 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 21:38:36.305859  358628 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 21:38:36.305868  358628 command_runner.go:130] > # metrics_key = ""
	I0108 21:38:36.305875  358628 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 21:38:36.305884  358628 command_runner.go:130] > [crio.tracing]
	I0108 21:38:36.305896  358628 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 21:38:36.305906  358628 command_runner.go:130] > # enable_tracing = false
	I0108 21:38:36.305917  358628 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 21:38:36.305928  358628 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 21:38:36.305940  358628 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 21:38:36.305951  358628 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 21:38:36.305959  358628 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 21:38:36.305965  358628 command_runner.go:130] > [crio.stats]
	I0108 21:38:36.305979  358628 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 21:38:36.305991  358628 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 21:38:36.306002  358628 command_runner.go:130] > # stats_collection_period = 0
	I0108 21:38:36.306137  358628 command_runner.go:130] ! time="2024-01-08 21:38:36.287087712Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 21:38:36.306157  358628 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
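The crio config dump above is how minikube confirms the runtime's effective settings; note cgroup_manager = "cgroupfs", which has to agree with the kubelet's cgroupDriver in the kubeadm config further down. A minimal, standard-library-only sketch of pulling one scalar key out of such a dump; the line-based parsing is an assumption made for brevity, not CRI-O's or minikube's API:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// tomlValue returns the unquoted value of a top-level `key = "value"` line in
	// a crio config dump. It skips comments and is only meant for simple scalar
	// keys such as cgroup_manager or pause_image.
	func tomlValue(dump, key string) (string, bool) {
		sc := bufio.NewScanner(strings.NewReader(dump))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "#") || !strings.HasPrefix(line, key) {
				continue
			}
			if _, v, ok := strings.Cut(line, "="); ok {
				return strings.Trim(strings.TrimSpace(v), `"`), true
			}
		}
		return "", false
	}

	func main() {
		dump := "cgroup_manager = \"cgroupfs\"\npause_image = \"registry.k8s.io/pause:3.9\"\n"
		v, _ := tomlValue(dump, "cgroup_manager")
		fmt.Println(v) // cgroupfs
	}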
	I0108 21:38:36.306397  358628 cni.go:84] Creating CNI manager for ""
	I0108 21:38:36.306410  358628 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:38:36.306419  358628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:38:36.306473  358628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.111 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-962345 NodeName:multinode-962345-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:38:36.306672  358628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-962345-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
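The YAML above is rendered per node: advertiseAddress, the node name, and node-ip all come from the m02 worker (192.168.39.111), while controlPlaneEndpoint stays pinned to control-plane.minikube.internal:8443. A sketch of rendering such a per-node fragment with text/template; the template text and struct are illustrative, assuming only the fields visible in the log:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeConfig carries the per-node values substituted into the kubeadm YAML.
	type nodeConfig struct {
		NodeName string
		NodeIP   string
	}

	const initCfg = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.NodeIP}}\n" +
		"  bindPort: 8443\n" +
		"nodeRegistration:\n" +
		"  criSocket: unix:///var/run/crio/crio.sock\n" +
		"  name: \"{{.NodeName}}\"\n" +
		"  kubeletExtraArgs:\n" +
		"    node-ip: {{.NodeIP}}\n"

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		// Values taken from the log above for multinode-962345-m02.
		_ = t.Execute(os.Stdout, nodeConfig{NodeName: "multinode-962345-m02", NodeIP: "192.168.39.111"})
	}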
	I0108 21:38:36.306757  358628 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-962345-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
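The ExecStart line above is assembled from the cluster config and the node profile, then written out as the 10-kubeadm.conf drop-in a few lines below (the "scp memory --> ... (380 bytes)" entries). A sketch of building just the flag string; the helper name and parameters are assumptions, and the real bootstrapper derives these values from the config structs shown in the log:

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletExecStart reproduces the shape of the ExecStart line in the log.
	func kubeletExecStart(version, nodeName, nodeIP string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.28.4", "multinode-962345-m02", "192.168.39.111"))
	}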
	I0108 21:38:36.306825  358628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:38:36.316014  358628 command_runner.go:130] > kubeadm
	I0108 21:38:36.316036  358628 command_runner.go:130] > kubectl
	I0108 21:38:36.316047  358628 command_runner.go:130] > kubelet
	I0108 21:38:36.316077  358628 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:38:36.316141  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 21:38:36.324119  358628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0108 21:38:36.339486  358628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:38:36.356232  358628 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0108 21:38:36.360030  358628 command_runner.go:130] > 192.168.39.239	control-plane.minikube.internal
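The grep above confirms that control-plane.minikube.internal already resolves to the control-plane IP on this node; when the entry is missing, it has to be appended to /etc/hosts before kubeadm join can reach the API server. A minimal check-then-append sketch run locally with os/exec (the function name is an assumption; the log performs the same grep over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureHostsEntry makes sure "<ip>\tcontrol-plane.minikube.internal" is in
	// /etc/hosts, mirroring the grep in the log above.
	func ensureHostsEntry(ip string) error {
		entry := ip + "\tcontrol-plane.minikube.internal"
		if exec.Command("grep", entry+"$", "/etc/hosts").Run() == nil {
			return nil // entry already present, nothing to do
		}
		// Append the missing entry; needs root, hence the sudo sh -c wrapper.
		return exec.Command("sudo", "sh", "-c", fmt.Sprintf("echo '%s' >> /etc/hosts", entry)).Run()
	}

	func main() {
		if err := ensureHostsEntry("192.168.39.239"); err != nil {
			fmt.Println(err)
		}
	}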
	I0108 21:38:36.360108  358628 host.go:66] Checking if "multinode-962345" exists ...
	I0108 21:38:36.360390  358628 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:38:36.360456  358628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:38:36.360498  358628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:38:36.375150  358628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35271
	I0108 21:38:36.375604  358628 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:38:36.376051  358628 main.go:141] libmachine: Using API Version  1
	I0108 21:38:36.376081  358628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:38:36.376455  358628 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:38:36.376654  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:38:36.376806  358628 start.go:304] JoinCluster: &{Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.120 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:38:36.376926  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 21:38:36.376944  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:38:36.379855  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:38:36.380247  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:38:36.380273  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:38:36.380401  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:38:36.380576  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:38:36.380723  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:38:36.380857  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:38:36.560938  358628 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token eewnj1.4kz3fk2m0jc0htiw --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 21:38:36.561283  358628 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 21:38:36.561353  358628 host.go:66] Checking if "multinode-962345" exists ...
	I0108 21:38:36.561718  358628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:38:36.561751  358628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:38:36.576217  358628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0108 21:38:36.576678  358628 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:38:36.577160  358628 main.go:141] libmachine: Using API Version  1
	I0108 21:38:36.577182  358628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:38:36.577530  358628 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:38:36.577741  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:38:36.577941  358628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-962345-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 21:38:36.577964  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:38:36.580745  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:38:36.581116  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:38:36.581143  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:38:36.581290  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:38:36.581455  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:38:36.581598  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:38:36.581750  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:38:36.805370  358628 command_runner.go:130] > node/multinode-962345-m02 cordoned
	I0108 21:38:39.862098  358628 command_runner.go:130] > pod "busybox-5bc68d56bd-qwxd6" has DeletionTimestamp older than 1 seconds, skipping
	I0108 21:38:39.862127  358628 command_runner.go:130] > node/multinode-962345-m02 drained
	I0108 21:38:39.863696  358628 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 21:38:39.863717  358628 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-mvv2x, kube-system/kube-proxy-2c2t6
	I0108 21:38:39.863739  358628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-962345-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.28577472s)
	I0108 21:38:39.863753  358628 node.go:108] successfully drained node "m02"
	I0108 21:38:39.864126  358628 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:38:39.864345  358628 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:38:39.864810  358628 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 21:38:39.864867  358628 round_trippers.go:463] DELETE https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:38:39.864874  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:39.864882  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:39.864888  358628 round_trippers.go:473]     Content-Type: application/json
	I0108 21:38:39.864894  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:39.881748  358628 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0108 21:38:39.881765  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:39.881772  358628 round_trippers.go:580]     Content-Length: 171
	I0108 21:38:39.881777  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:39 GMT
	I0108 21:38:39.881782  358628 round_trippers.go:580]     Audit-Id: 2c5cc8e7-a9e8-4297-9196-b3362dd06cb0
	I0108 21:38:39.881787  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:39.881792  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:39.881797  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:39.881805  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:39.881899  358628 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-962345-m02","kind":"nodes","uid":"fa6bdb61-41a8-407b-b80f-8e8c00e94a73"}}
	I0108 21:38:39.881932  358628 node.go:124] successfully deleted node "m02"
	I0108 21:38:39.881950  358628 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
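Note: the cordon/drain/delete sequence above corresponds to the following manual kubectl flow (a sketch; flags mirror those shown in the log, node name taken from this run):
  kubectl drain multinode-962345-m02 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data --disable-eviction
  kubectl delete node multinode-962345-m02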
	I0108 21:38:39.881974  358628 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 21:38:39.882002  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token eewnj1.4kz3fk2m0jc0htiw --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-962345-m02"
	I0108 21:38:39.988930  358628 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:38:40.214873  358628 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 21:38:40.214915  358628 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 21:38:40.279342  358628 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:38:40.279439  358628 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:38:40.279801  358628 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:38:40.457101  358628 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 21:38:40.982190  358628 command_runner.go:130] > This node has joined the cluster:
	I0108 21:38:40.982227  358628 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 21:38:40.982238  358628 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 21:38:40.982247  358628 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 21:38:40.984823  358628 command_runner.go:130] ! W0108 21:38:39.977313    2646 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 21:38:40.984849  358628 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 21:38:40.984861  358628 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 21:38:40.984877  358628 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 21:38:40.985005  358628 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token eewnj1.4kz3fk2m0jc0htiw --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-962345-m02": (1.102980607s)
	I0108 21:38:40.985039  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 21:38:41.274277  358628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-962345 minikube.k8s.io/updated_at=2024_01_08T21_38_41_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:38:41.381268  358628 command_runner.go:130] > node/multinode-962345-m02 labeled
	I0108 21:38:41.395250  358628 command_runner.go:130] > node/multinode-962345-m03 labeled
	I0108 21:38:41.397407  358628 start.go:306] JoinCluster complete in 5.020594543s
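Note: with the join complete, the rejoined worker can be verified from the host; a sketch, assuming the kubeconfig context matches the profile name as minikube normally sets it:
  kubectl --context multinode-962345 get nodes -o wide
  kubectl --context multinode-962345 get node multinode-962345-m02 --show-labels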
	I0108 21:38:41.397436  358628 cni.go:84] Creating CNI manager for ""
	I0108 21:38:41.397443  358628 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:38:41.397503  358628 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:38:41.404146  358628 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:38:41.404171  358628 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:38:41.404181  358628 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:38:41.404191  358628 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:38:41.404216  358628 command_runner.go:130] > Access: 2024-01-08 21:36:17.412212418 +0000
	I0108 21:38:41.404228  358628 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0108 21:38:41.404236  358628 command_runner.go:130] > Change: 2024-01-08 21:36:15.543212418 +0000
	I0108 21:38:41.404245  358628 command_runner.go:130] >  Birth: -
	I0108 21:38:41.404431  358628 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:38:41.404448  358628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:38:41.422991  358628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:38:41.755340  358628 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:38:41.759262  358628 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:38:41.761991  358628 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:38:41.772175  358628 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 21:38:41.774909  358628 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:38:41.775176  358628 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:38:41.775566  358628 round_trippers.go:463] GET https://192.168.39.239:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:38:41.775581  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.775590  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.775623  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.777670  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:41.777692  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.777700  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.777708  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.777716  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.777725  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.777738  358628 round_trippers.go:580]     Content-Length: 291
	I0108 21:38:41.777747  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.777755  358628 round_trippers.go:580]     Audit-Id: bcbffb4e-85e3-42e7-b748-aade42932a5b
	I0108 21:38:41.777782  358628 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9a0db73a-68c0-469b-b860-0baad5e41646","resourceVersion":"883","creationTimestamp":"2024-01-08T21:26:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:38:41.777887  358628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-962345" context rescaled to 1 replicas
	I0108 21:38:41.777922  358628 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 21:38:41.780083  358628 out.go:177] * Verifying Kubernetes components...
	I0108 21:38:41.781491  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:38:41.795582  358628 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:38:41.795928  358628 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:38:41.796181  358628 node_ready.go:35] waiting up to 6m0s for node "multinode-962345-m02" to be "Ready" ...
	I0108 21:38:41.796259  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:38:41.796266  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.796274  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.796281  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.798969  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:41.798992  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.799003  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.799012  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.799019  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.799032  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.799041  358628 round_trippers.go:580]     Audit-Id: 3fa85b0f-0818-47f0-ad52-f322a8d6bba9
	I0108 21:38:41.799051  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.799191  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"3106d5b8-f2c3-437d-bf0a-adb8732a102b","resourceVersion":"1036","creationTimestamp":"2024-01-08T21:38:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_38_41_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:38:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 21:38:41.799498  358628 node_ready.go:49] node "multinode-962345-m02" has status "Ready":"True"
	I0108 21:38:41.799515  358628 node_ready.go:38] duration metric: took 3.31649ms waiting for node "multinode-962345-m02" to be "Ready" ...
	I0108 21:38:41.799526  358628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:38:41.799592  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:38:41.799602  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.799612  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.799623  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.803294  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:38:41.803313  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.803323  358628 round_trippers.go:580]     Audit-Id: 6506cee5-697c-46f0-b817-e06f518a4802
	I0108 21:38:41.803332  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.803337  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.803342  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.803351  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.803372  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.804749  358628 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1041"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"871","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82246 chars]
	I0108 21:38:41.807218  358628 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:41.807289  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:38:41.807300  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.807309  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.807317  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.809484  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:41.809510  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.809517  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.809523  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.809529  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.809537  358628 round_trippers.go:580]     Audit-Id: b2a33512-ab05-4abd-b178-7c2285779fa1
	I0108 21:38:41.809542  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.809548  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.809702  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"871","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 21:38:41.810159  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:38:41.810173  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.810181  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.810187  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.812074  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:38:41.812088  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.812097  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.812105  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.812118  358628 round_trippers.go:580]     Audit-Id: 8d900ea7-3226-4ad3-a99a-dd1879077fcd
	I0108 21:38:41.812127  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.812135  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.812147  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.812415  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:38:41.812813  358628 pod_ready.go:92] pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace has status "Ready":"True"
	I0108 21:38:41.812833  358628 pod_ready.go:81] duration metric: took 5.594711ms waiting for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:41.812845  358628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:41.812915  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-962345
	I0108 21:38:41.812926  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.812935  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.812947  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.814706  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:38:41.814722  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.814738  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.814746  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.814753  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.814762  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.814775  358628 round_trippers.go:580]     Audit-Id: 3bca5656-ce45-4078-a8ad-4cdaaf427f03
	I0108 21:38:41.814785  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.815222  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-962345","namespace":"kube-system","uid":"44773ce7-5393-4178-a985-d8bf216f88f1","resourceVersion":"864","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.239:2379","kubernetes.io/config.hash":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.mirror":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.seen":"2024-01-08T21:26:26.755438257Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 21:38:41.815567  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:38:41.815582  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.815592  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.815600  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.817347  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:38:41.817365  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.817376  358628 round_trippers.go:580]     Audit-Id: 5bf49451-77ba-4310-975f-325b48339453
	I0108 21:38:41.817384  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.817389  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.817398  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.817403  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.817413  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.817792  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:38:41.818075  358628 pod_ready.go:92] pod "etcd-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:38:41.818090  358628 pod_ready.go:81] duration metric: took 5.233377ms waiting for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:41.818115  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:41.818172  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-962345
	I0108 21:38:41.818182  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.818193  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.818206  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.820057  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:38:41.820071  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.820079  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.820087  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.820097  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.820110  358628 round_trippers.go:580]     Audit-Id: 1efcaf6c-f20c-4dd5-a4c3-3cb5b76065af
	I0108 21:38:41.820118  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.820132  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.820386  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-962345","namespace":"kube-system","uid":"bea03251-08df-4434-bc4a-36ef454e151e","resourceVersion":"862","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.239:8443","kubernetes.io/config.hash":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.mirror":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.seen":"2024-01-08T21:26:26.755439577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 21:38:41.820720  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:38:41.820730  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.820737  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.820742  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.822493  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:38:41.822507  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.822517  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.822525  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.822534  358628 round_trippers.go:580]     Audit-Id: f2cf3035-fb0c-4787-9445-c7a292cbac9f
	I0108 21:38:41.822548  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.822557  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.822566  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.822822  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:38:41.823080  358628 pod_ready.go:92] pod "kube-apiserver-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:38:41.823092  358628 pod_ready.go:81] duration metric: took 4.96511ms waiting for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:41.823100  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:41.823140  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-962345
	I0108 21:38:41.823147  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.823154  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.823160  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.825089  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:38:41.825103  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.825111  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.825119  358628 round_trippers.go:580]     Audit-Id: 31edd62d-32e2-43c6-8012-f0e7ca34b2be
	I0108 21:38:41.825128  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.825142  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.825151  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.825163  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.825321  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-962345","namespace":"kube-system","uid":"80b86d62-83f0-4550-988f-6255409d39da","resourceVersion":"865","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.mirror":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.seen":"2024-01-08T21:26:26.755427365Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 21:38:41.825667  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:38:41.825680  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.825687  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:41.825693  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.827192  358628 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:38:41.827205  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:41.827214  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:41.827222  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:41.827231  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:41.827245  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:41.827255  358628 round_trippers.go:580]     Audit-Id: bc3bf5c2-3d18-4448-8d40-849881ef2f08
	I0108 21:38:41.827267  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:41.827486  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:38:41.827756  358628 pod_ready.go:92] pod "kube-controller-manager-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:38:41.827770  358628 pod_ready.go:81] duration metric: took 4.664993ms waiting for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:41.827793  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:41.997243  358628 request.go:629] Waited for 169.378888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:38:41.997358  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:38:41.997369  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:41.997384  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:41.997398  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:42.000599  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:38:42.000625  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:42.000639  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:42.000648  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:42.000656  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:42.000662  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:41 GMT
	I0108 21:38:42.000667  358628 round_trippers.go:580]     Audit-Id: 64c53923-465f-425b-a841-0ab3944f27cc
	I0108 21:38:42.000672  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:42.000952  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2c2t6","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e","resourceVersion":"1008","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0108 21:38:42.196862  358628 request.go:629] Waited for 195.36991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:38:42.196936  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:38:42.196943  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:42.196955  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:42.196967  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:42.199557  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:42.199578  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:42.199593  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:42.199600  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:42.199607  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:42.199615  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:42.199624  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:42 GMT
	I0108 21:38:42.199636  358628 round_trippers.go:580]     Audit-Id: 6f1e0d6f-ee16-4033-91c8-94a2c5c36711
	I0108 21:38:42.199873  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"3106d5b8-f2c3-437d-bf0a-adb8732a102b","resourceVersion":"1036","creationTimestamp":"2024-01-08T21:38:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_38_41_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:38:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 21:38:42.396549  358628 request.go:629] Waited for 68.158201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:38:42.396613  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:38:42.396618  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:42.396626  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:42.396633  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:42.399209  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:42.399230  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:42.399240  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:42 GMT
	I0108 21:38:42.399248  358628 round_trippers.go:580]     Audit-Id: 9af3845a-660b-4ab2-b077-e0f6b9b4e62f
	I0108 21:38:42.399257  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:42.399265  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:42.399275  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:42.399285  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:42.399585  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2c2t6","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e","resourceVersion":"1008","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0108 21:38:42.597367  358628 request.go:629] Waited for 197.357742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:38:42.597462  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:38:42.597469  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:42.597478  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:42.597487  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:42.600507  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:38:42.600612  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:42.600625  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:42.600635  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:42.600644  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:42.600656  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:42 GMT
	I0108 21:38:42.600662  358628 round_trippers.go:580]     Audit-Id: 86866ccd-2958-4df9-9312-83fa250f9cd5
	I0108 21:38:42.600667  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:42.600976  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"3106d5b8-f2c3-437d-bf0a-adb8732a102b","resourceVersion":"1036","creationTimestamp":"2024-01-08T21:38:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_38_41_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:38:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 21:38:42.828653  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:38:42.828691  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:42.828701  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:42.828710  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:42.831242  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:42.831268  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:42.831278  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:42.831285  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:42.831293  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:42 GMT
	I0108 21:38:42.831300  358628 round_trippers.go:580]     Audit-Id: 5fa501bf-497b-4482-9e3a-9f8eb5723b56
	I0108 21:38:42.831306  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:42.831314  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:42.831533  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2c2t6","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e","resourceVersion":"1053","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0108 21:38:42.996401  358628 request.go:629] Waited for 164.276263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:38:42.996543  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:38:42.996554  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:42.996562  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:42.996568  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:42.999351  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:42.999393  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:42.999404  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:42 GMT
	I0108 21:38:42.999412  358628 round_trippers.go:580]     Audit-Id: 3a1f40f0-f3ae-4097-9b3b-3c10283bd730
	I0108 21:38:42.999418  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:42.999425  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:42.999433  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:42.999441  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:43.000178  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"3106d5b8-f2c3-437d-bf0a-adb8732a102b","resourceVersion":"1036","creationTimestamp":"2024-01-08T21:38:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_38_41_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:38:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 21:38:43.000450  358628 pod_ready.go:92] pod "kube-proxy-2c2t6" in "kube-system" namespace has status "Ready":"True"
	I0108 21:38:43.000466  358628 pod_ready.go:81] duration metric: took 1.172668672s waiting for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:43.000476  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:43.196936  358628 request.go:629] Waited for 196.377975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:38:43.196999  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:38:43.197011  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:43.197019  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:43.197029  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:43.200472  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:38:43.200492  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:43.200499  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:43.200504  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:43.200510  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:43 GMT
	I0108 21:38:43.200515  358628 round_trippers.go:580]     Audit-Id: 0821af05-bb85-4bae-9364-0d2e07b96127
	I0108 21:38:43.200520  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:43.200526  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:43.201145  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmjzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"fbfa39a4-ba62-4e31-8126-9a320311e846","resourceVersion":"754","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 21:38:43.396869  358628 request.go:629] Waited for 195.201855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:38:43.396934  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:38:43.396939  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:43.396947  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:43.396955  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:43.400264  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:38:43.400292  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:43.400309  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:43.400319  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:43 GMT
	I0108 21:38:43.400328  358628 round_trippers.go:580]     Audit-Id: 2f845e1c-9b24-4729-83d8-bb07b3f1d2b3
	I0108 21:38:43.400342  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:43.400350  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:43.400359  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:43.400778  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:38:43.401127  358628 pod_ready.go:92] pod "kube-proxy-bmjzs" in "kube-system" namespace has status "Ready":"True"
	I0108 21:38:43.401145  358628 pod_ready.go:81] duration metric: took 400.663415ms waiting for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:43.401156  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cpq6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:43.597322  358628 request.go:629] Waited for 196.081477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cpq6p
	I0108 21:38:43.597395  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cpq6p
	I0108 21:38:43.597404  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:43.597412  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:43.597418  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:43.600368  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:43.600392  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:43.600403  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:43.600412  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:43.600420  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:43.600428  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:43.600436  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:43 GMT
	I0108 21:38:43.600444  358628 round_trippers.go:580]     Audit-Id: 09388f5a-c74e-4002-89b5-7c36f3793460
	I0108 21:38:43.600915  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cpq6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"52634211-9ecd-4fd9-a8ce-88f67c668e75","resourceVersion":"717","creationTimestamp":"2024-01-08T21:28:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 21:38:43.796541  358628 request.go:629] Waited for 195.17972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:38:43.796617  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:38:43.796622  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:43.796631  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:43.796636  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:43.799229  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:43.799253  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:43.799264  358628 round_trippers.go:580]     Audit-Id: 2f878078-f71a-43d2-bcb9-8184d6f43b61
	I0108 21:38:43.799271  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:43.799276  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:43.799281  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:43.799289  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:43.799295  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:43 GMT
	I0108 21:38:43.799487  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m03","uid":"d31cb22f-3104-4da9-bd90-2f7e1fa3889a","resourceVersion":"1037","creationTimestamp":"2024-01-08T21:28:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_38_41_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I0108 21:38:43.799805  358628 pod_ready.go:92] pod "kube-proxy-cpq6p" in "kube-system" namespace has status "Ready":"True"
	I0108 21:38:43.799824  358628 pod_ready.go:81] duration metric: took 398.653912ms waiting for pod "kube-proxy-cpq6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:43.799835  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:43.996938  358628 request.go:629] Waited for 197.023962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:38:43.997010  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:38:43.997015  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:43.997023  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:43.997029  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:43.999981  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:44.000002  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:44.000011  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:44.000016  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:44.000022  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:43 GMT
	I0108 21:38:44.000034  358628 round_trippers.go:580]     Audit-Id: 84a57f5e-1747-46ba-a609-be050fd1bdda
	I0108 21:38:44.000043  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:44.000050  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:44.000263  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-962345","namespace":"kube-system","uid":"3778c0a4-1528-4336-9f02-b77a2a6a1c34","resourceVersion":"873","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.mirror":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.seen":"2024-01-08T21:26:26.755431609Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 21:38:44.197147  358628 request.go:629] Waited for 196.364208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:38:44.197224  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:38:44.197229  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:44.197237  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:44.197243  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:44.200080  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:38:44.200107  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:44.200117  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:44.200125  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:44.200135  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:44.200143  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:44.200153  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:44 GMT
	I0108 21:38:44.200160  358628 round_trippers.go:580]     Audit-Id: 64a42c47-f504-433d-b48b-f750a0688b4a
	I0108 21:38:44.200426  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:38:44.200775  358628 pod_ready.go:92] pod "kube-scheduler-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:38:44.200794  358628 pod_ready.go:81] duration metric: took 400.948371ms waiting for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:38:44.200807  358628 pod_ready.go:38] duration metric: took 2.401269572s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:38:44.200835  358628 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:38:44.200883  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:38:44.217040  358628 system_svc.go:56] duration metric: took 16.195665ms WaitForService to wait for kubelet.
	I0108 21:38:44.217072  358628 kubeadm.go:581] duration metric: took 2.439113475s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:38:44.217106  358628 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:38:44.396514  358628 request.go:629] Waited for 179.296576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes
	I0108 21:38:44.396602  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes
	I0108 21:38:44.396616  358628 round_trippers.go:469] Request Headers:
	I0108 21:38:44.396627  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:38:44.396640  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:38:44.400206  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:38:44.400234  358628 round_trippers.go:577] Response Headers:
	I0108 21:38:44.400244  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:38:44.400253  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:38:44 GMT
	I0108 21:38:44.400261  358628 round_trippers.go:580]     Audit-Id: 6b8f3655-b5ae-40ab-a33b-edf2195fe900
	I0108 21:38:44.400269  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:38:44.400276  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:38:44.400284  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:38:44.400908  358628 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16211 chars]
	I0108 21:38:44.401765  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:38:44.401796  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:38:44.401811  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:38:44.401817  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:38:44.401822  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:38:44.401828  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:38:44.401834  358628 node_conditions.go:105] duration metric: took 184.722169ms to run NodePressure ...
	I0108 21:38:44.401853  358628 start.go:228] waiting for startup goroutines ...
	I0108 21:38:44.401917  358628 start.go:242] writing updated cluster config ...
	I0108 21:38:44.402507  358628 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:38:44.402643  358628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:38:44.405677  358628 out.go:177] * Starting worker node multinode-962345-m03 in cluster multinode-962345
	I0108 21:38:44.407017  358628 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:38:44.407048  358628 cache.go:56] Caching tarball of preloaded images
	I0108 21:38:44.407151  358628 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:38:44.407166  358628 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:38:44.407311  358628 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/config.json ...
	I0108 21:38:44.407569  358628 start.go:365] acquiring machines lock for multinode-962345-m03: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:38:44.407632  358628 start.go:369] acquired machines lock for "multinode-962345-m03" in 35.018µs
	I0108 21:38:44.407653  358628 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:38:44.407660  358628 fix.go:54] fixHost starting: m03
	I0108 21:38:44.407968  358628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:38:44.408006  358628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:38:44.422771  358628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0108 21:38:44.423191  358628 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:38:44.423658  358628 main.go:141] libmachine: Using API Version  1
	I0108 21:38:44.423681  358628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:38:44.424062  358628 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:38:44.424246  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .DriverName
	I0108 21:38:44.424399  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetState
	I0108 21:38:44.425817  358628 fix.go:102] recreateIfNeeded on multinode-962345-m03: state=Running err=<nil>
	W0108 21:38:44.425836  358628 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:38:44.427888  358628 out.go:177] * Updating the running kvm2 "multinode-962345-m03" VM ...
	I0108 21:38:44.429326  358628 machine.go:88] provisioning docker machine ...
	I0108 21:38:44.429374  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .DriverName
	I0108 21:38:44.429591  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetMachineName
	I0108 21:38:44.429770  358628 buildroot.go:166] provisioning hostname "multinode-962345-m03"
	I0108 21:38:44.429800  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetMachineName
	I0108 21:38:44.429934  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHHostname
	I0108 21:38:44.432207  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.432616  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:38:44.432646  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.432810  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHPort
	I0108 21:38:44.432988  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:38:44.433116  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:38:44.433246  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHUsername
	I0108 21:38:44.433406  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:38:44.433734  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0108 21:38:44.433749  358628 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-962345-m03 && echo "multinode-962345-m03" | sudo tee /etc/hostname
	I0108 21:38:44.581573  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-962345-m03
	
	I0108 21:38:44.581612  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHHostname
	I0108 21:38:44.584829  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.585160  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:38:44.585196  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.585414  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHPort
	I0108 21:38:44.585650  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:38:44.585797  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:38:44.585957  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHUsername
	I0108 21:38:44.586083  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:38:44.586418  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0108 21:38:44.586436  358628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-962345-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-962345-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-962345-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:38:44.716086  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:38:44.716116  358628 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 21:38:44.716132  358628 buildroot.go:174] setting up certificates
	I0108 21:38:44.716143  358628 provision.go:83] configureAuth start
	I0108 21:38:44.716156  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetMachineName
	I0108 21:38:44.716494  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetIP
	I0108 21:38:44.719140  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.719461  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:38:44.719483  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.719674  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHHostname
	I0108 21:38:44.721709  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.722037  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:38:44.722085  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.722188  358628 provision.go:138] copyHostCerts
	I0108 21:38:44.722228  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:38:44.722259  358628 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 21:38:44.722269  358628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:38:44.722337  358628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 21:38:44.722408  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:38:44.722425  358628 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 21:38:44.722432  358628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:38:44.722454  358628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 21:38:44.722495  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:38:44.722510  358628 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 21:38:44.722516  358628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:38:44.722535  358628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 21:38:44.722592  358628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.multinode-962345-m03 san=[192.168.39.120 192.168.39.120 localhost 127.0.0.1 minikube multinode-962345-m03]
	I0108 21:38:44.843946  358628 provision.go:172] copyRemoteCerts
	I0108 21:38:44.844006  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:38:44.844030  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHHostname
	I0108 21:38:44.846647  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.846984  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:38:44.847010  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:44.847204  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHPort
	I0108 21:38:44.847392  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:38:44.847570  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHUsername
	I0108 21:38:44.847727  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m03/id_rsa Username:docker}
	I0108 21:38:44.944712  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:38:44.944795  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:38:44.967870  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:38:44.967970  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 21:38:44.990492  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:38:44.990558  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:38:45.012505  358628 provision.go:86] duration metric: configureAuth took 296.348778ms
	I0108 21:38:45.012535  358628 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:38:45.012741  358628 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:38:45.012826  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHHostname
	I0108 21:38:45.015516  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:45.015879  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:38:45.015912  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:38:45.016196  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHPort
	I0108 21:38:45.016364  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:38:45.016576  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:38:45.016733  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHUsername
	I0108 21:38:45.016944  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:38:45.017318  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0108 21:38:45.017337  358628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:40:15.616463  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:40:15.616510  358628 machine.go:91] provisioned docker machine in 1m31.187166749s
	I0108 21:40:15.616525  358628 start.go:300] post-start starting for "multinode-962345-m03" (driver="kvm2")
	I0108 21:40:15.616541  358628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:40:15.616569  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .DriverName
	I0108 21:40:15.616911  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:40:15.616948  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHHostname
	I0108 21:40:15.620128  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.620553  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:40:15.620581  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.620782  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHPort
	I0108 21:40:15.621000  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:40:15.621149  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHUsername
	I0108 21:40:15.621301  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m03/id_rsa Username:docker}
	I0108 21:40:15.717839  358628 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:40:15.722071  358628 command_runner.go:130] > NAME=Buildroot
	I0108 21:40:15.722100  358628 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0108 21:40:15.722105  358628 command_runner.go:130] > ID=buildroot
	I0108 21:40:15.722111  358628 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:40:15.722116  358628 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:40:15.722149  358628 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:40:15.722164  358628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 21:40:15.722247  358628 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 21:40:15.722330  358628 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 21:40:15.722342  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /etc/ssl/certs/3419822.pem
	I0108 21:40:15.722437  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:40:15.730632  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:40:15.755750  358628 start.go:303] post-start completed in 139.204345ms
	I0108 21:40:15.755778  358628 fix.go:56] fixHost completed within 1m31.348117242s
	I0108 21:40:15.755807  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHHostname
	I0108 21:40:15.758421  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.758811  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:40:15.758846  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.759053  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHPort
	I0108 21:40:15.759246  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:40:15.759374  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:40:15.759496  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHUsername
	I0108 21:40:15.759706  358628 main.go:141] libmachine: Using SSH client type: native
	I0108 21:40:15.760065  358628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I0108 21:40:15.760081  358628 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:40:15.888017  358628 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704750015.880069613
	
	I0108 21:40:15.888044  358628 fix.go:206] guest clock: 1704750015.880069613
	I0108 21:40:15.888061  358628 fix.go:219] Guest: 2024-01-08 21:40:15.880069613 +0000 UTC Remote: 2024-01-08 21:40:15.755784275 +0000 UTC m=+549.479418851 (delta=124.285338ms)
	I0108 21:40:15.888095  358628 fix.go:190] guest clock delta is within tolerance: 124.285338ms
	I0108 21:40:15.888104  358628 start.go:83] releasing machines lock for "multinode-962345-m03", held for 1m31.480459099s
	I0108 21:40:15.888125  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .DriverName
	I0108 21:40:15.888379  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetIP
	I0108 21:40:15.891323  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.891739  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:40:15.891771  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.893756  358628 out.go:177] * Found network options:
	I0108 21:40:15.895182  358628 out.go:177]   - NO_PROXY=192.168.39.239,192.168.39.111
	W0108 21:40:15.896528  358628 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 21:40:15.896551  358628 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:40:15.896564  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .DriverName
	I0108 21:40:15.897341  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .DriverName
	I0108 21:40:15.897568  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .DriverName
	I0108 21:40:15.897688  358628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:40:15.897731  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHHostname
	W0108 21:40:15.897768  358628 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 21:40:15.897794  358628 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:40:15.897875  358628 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:40:15.897900  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHHostname
	I0108 21:40:15.900401  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.900779  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.900814  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:40:15.900837  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.900966  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHPort
	I0108 21:40:15.901157  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:40:15.901243  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:40:15.901282  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:15.901332  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHUsername
	I0108 21:40:15.901449  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHPort
	I0108 21:40:15.901521  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m03/id_rsa Username:docker}
	I0108 21:40:15.901618  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHKeyPath
	I0108 21:40:15.901783  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetSSHUsername
	I0108 21:40:15.901922  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m03/id_rsa Username:docker}
	I0108 21:40:16.147579  358628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:40:16.147581  358628 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:40:16.153582  358628 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 21:40:16.153700  358628 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:40:16.153766  358628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:40:16.162541  358628 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
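	The find invocation above renames any bridge or podman CNI configs out of the way so they cannot shadow the CNI that minikube manages itself. A shell-quoted sketch of the same command (quoting added for a local shell; in the log it is passed unquoted through the remote sh):

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;

	On this node nothing matched, hence "nothing to disable".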
	I0108 21:40:16.162567  358628 start.go:475] detecting cgroup driver to use...
	I0108 21:40:16.162655  358628 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:40:16.175892  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:40:16.188517  358628 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:40:16.188578  358628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:40:16.202347  358628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:40:16.215111  358628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:40:16.343735  358628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:40:16.463673  358628 docker.go:219] disabling docker service ...
	I0108 21:40:16.463760  358628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:40:16.477920  358628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:40:16.490514  358628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:40:16.606891  358628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:40:16.725902  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
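	The systemctl calls above stop, disable and mask both cri-docker and docker so that CRI-O is the only runtime left on the node, then verify docker really is inactive. Condensed into plain shell (an equivalent sketch, not a verbatim copy of the individual log commands):

	    sudo systemctl stop -f cri-docker.socket cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service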
	I0108 21:40:16.738809  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:40:16.756677  358628 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
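	This step writes /etc/crictl.yaml so that crictl addresses CRI-O's socket explicitly; the resulting file contains exactly the single line echoed back above:

	    runtime-endpoint: unix:///var/run/crio/crio.sock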
	I0108 21:40:16.757028  358628 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:40:16.757097  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:40:16.766429  358628 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:40:16.766517  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:40:16.775708  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:40:16.784929  358628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:40:16.794177  358628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:40:16.803545  358628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:40:16.812583  358628 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:40:16.812760  358628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:40:16.821890  358628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:40:16.941755  358628 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:40:23.983282  358628 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.041481216s)
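	The sed edits and sysctl tweaks above are the bulk of minikube's CRI-O customization on a joining node: pin the pause image, force the cgroupfs cgroup manager (with conmon in the pod cgroup), drop any stale minikube CNI config, enable IPv4 forwarding, then restart the daemon (about seven seconds here). Condensed into plain shell (an equivalent sketch of the logged commands; the CONF variable is added for readability):

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo rm -rf /etc/cni/net.mk
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio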
	I0108 21:40:23.983328  358628 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:40:23.983410  358628 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:40:23.988285  358628 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 21:40:23.988316  358628 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:40:23.988325  358628 command_runner.go:130] > Device: 16h/22d	Inode: 1220        Links: 1
	I0108 21:40:23.988335  358628 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:40:23.988343  358628 command_runner.go:130] > Access: 2024-01-08 21:40:23.906549577 +0000
	I0108 21:40:23.988352  358628 command_runner.go:130] > Modify: 2024-01-08 21:40:23.906549577 +0000
	I0108 21:40:23.988360  358628 command_runner.go:130] > Change: 2024-01-08 21:40:23.906549577 +0000
	I0108 21:40:23.988370  358628 command_runner.go:130] >  Birth: -
	I0108 21:40:23.988719  358628 start.go:543] Will wait 60s for crictl version
	I0108 21:40:23.988779  358628 ssh_runner.go:195] Run: which crictl
	I0108 21:40:23.992163  358628 command_runner.go:130] > /usr/bin/crictl
	I0108 21:40:23.992425  358628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:40:24.042097  358628 command_runner.go:130] > Version:  0.1.0
	I0108 21:40:24.042122  358628 command_runner.go:130] > RuntimeName:  cri-o
	I0108 21:40:24.042130  358628 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 21:40:24.042137  358628 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:40:24.042323  358628 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
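	The two waits above (first for the socket path, then for crictl) gate the rest of the join on the runtime actually answering CRI requests. The same check can be run by hand; on this node it reports CRI-O 1.24.1 speaking CRI API v1:

	    sudo /usr/bin/crictl version
	    # Version:            0.1.0
	    # RuntimeName:        cri-o
	    # RuntimeVersion:     1.24.1
	    # RuntimeApiVersion:  v1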
	I0108 21:40:24.042425  358628 ssh_runner.go:195] Run: crio --version
	I0108 21:40:24.092681  358628 command_runner.go:130] > crio version 1.24.1
	I0108 21:40:24.092706  358628 command_runner.go:130] > Version:          1.24.1
	I0108 21:40:24.092713  358628 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:40:24.092718  358628 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:40:24.092731  358628 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:40:24.092736  358628 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:40:24.092740  358628 command_runner.go:130] > Compiler:         gc
	I0108 21:40:24.092744  358628 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:40:24.092750  358628 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:40:24.092757  358628 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:40:24.092762  358628 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:40:24.092766  358628 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:40:24.094310  358628 ssh_runner.go:195] Run: crio --version
	I0108 21:40:24.142869  358628 command_runner.go:130] > crio version 1.24.1
	I0108 21:40:24.142899  358628 command_runner.go:130] > Version:          1.24.1
	I0108 21:40:24.142906  358628 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 21:40:24.142911  358628 command_runner.go:130] > GitTreeState:     dirty
	I0108 21:40:24.142917  358628 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0108 21:40:24.142922  358628 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 21:40:24.142926  358628 command_runner.go:130] > Compiler:         gc
	I0108 21:40:24.142931  358628 command_runner.go:130] > Platform:         linux/amd64
	I0108 21:40:24.142937  358628 command_runner.go:130] > Linkmode:         dynamic
	I0108 21:40:24.142952  358628 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 21:40:24.142960  358628 command_runner.go:130] > SeccompEnabled:   true
	I0108 21:40:24.142966  358628 command_runner.go:130] > AppArmorEnabled:  false
	I0108 21:40:24.144898  358628 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:40:24.146428  358628 out.go:177]   - env NO_PROXY=192.168.39.239
	I0108 21:40:24.147829  358628 out.go:177]   - env NO_PROXY=192.168.39.239,192.168.39.111
	I0108 21:40:24.149098  358628 main.go:141] libmachine: (multinode-962345-m03) Calling .GetIP
	I0108 21:40:24.151884  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:24.152226  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2a:3f", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:28:47 +0000 UTC Type:0 Mac:52:54:00:01:2a:3f Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-962345-m03 Clientid:01:52:54:00:01:2a:3f}
	I0108 21:40:24.152265  358628 main.go:141] libmachine: (multinode-962345-m03) DBG | domain multinode-962345-m03 has defined IP address 192.168.39.120 and MAC address 52:54:00:01:2a:3f in network mk-multinode-962345
	I0108 21:40:24.152517  358628 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:40:24.157060  358628 command_runner.go:130] > 192.168.39.1	host.minikube.internal
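	The grep simply confirms that the guest can reach the hypervisor host by name on the 192.168.39.0/24 network; the expected /etc/hosts entry, as echoed above, is:

	    192.168.39.1	host.minikube.internal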
	I0108 21:40:24.157135  358628 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345 for IP: 192.168.39.120
	I0108 21:40:24.157154  358628 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:40:24.157302  358628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 21:40:24.157341  358628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 21:40:24.157356  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:40:24.157370  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:40:24.157379  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:40:24.157394  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:40:24.157465  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 21:40:24.157495  358628 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 21:40:24.157505  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:40:24.157529  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:40:24.157554  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:40:24.157576  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 21:40:24.157615  358628 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:40:24.157641  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:40:24.157654  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem -> /usr/share/ca-certificates/341982.pem
	I0108 21:40:24.157667  358628 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> /usr/share/ca-certificates/3419822.pem
	I0108 21:40:24.158000  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:40:24.183384  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:40:24.204392  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:40:24.228351  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:40:24.252148  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:40:24.276571  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 21:40:24.299276  358628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
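	The scp calls above stage the shared CA material and the extra host-provided certificates on the new node; the destination layout, read straight from the paths in the log, is:

	    /var/lib/minikube/certs/ca.{crt,key}                 # cluster CA
	    /var/lib/minikube/certs/proxy-client-ca.{crt,key}    # front-proxy CA
	    /usr/share/ca-certificates/minikubeCA.pem            # CA to be added to the node trust store
	    /usr/share/ca-certificates/341982.pem, 3419822.pem   # additional certs copied from the test host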
	I0108 21:40:24.324072  358628 ssh_runner.go:195] Run: openssl version
	I0108 21:40:24.329794  358628 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:40:24.329989  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 21:40:24.340278  358628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 21:40:24.344831  358628 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:40:24.344863  358628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:40:24.344911  358628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 21:40:24.350784  358628 command_runner.go:130] > 3ec20f2e
	I0108 21:40:24.351055  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:40:24.360300  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:40:24.370248  358628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:40:24.374586  358628 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:40:24.374795  358628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:40:24.374847  358628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:40:24.380141  358628 command_runner.go:130] > b5213941
	I0108 21:40:24.380230  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:40:24.388619  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 21:40:24.398065  358628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 21:40:24.402271  358628 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:40:24.402703  358628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:40:24.402759  358628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 21:40:24.407706  358628 command_runner.go:130] > 51391683
	I0108 21:40:24.408026  358628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
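	Each certificate is made trusted by linking it into /etc/ssl/certs under its OpenSSL subject hash, which is how OpenSSL locates CAs at verification time. For minikubeCA.pem the hash printed above is b5213941, so the manual equivalent of that one iteration is:

	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0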
	I0108 21:40:24.416657  358628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:40:24.420973  358628 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:40:24.421020  358628 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:40:24.421110  358628 ssh_runner.go:195] Run: crio config
	I0108 21:40:24.487007  358628 command_runner.go:130] ! time="2024-01-08 21:40:24.479154731Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 21:40:24.487126  358628 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 21:40:24.495645  358628 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 21:40:24.495678  358628 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 21:40:24.495691  358628 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 21:40:24.495696  358628 command_runner.go:130] > #
	I0108 21:40:24.495713  358628 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 21:40:24.495723  358628 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 21:40:24.495736  358628 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 21:40:24.495755  358628 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 21:40:24.495765  358628 command_runner.go:130] > # reload'.
	I0108 21:40:24.495777  358628 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 21:40:24.495790  358628 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 21:40:24.495803  358628 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 21:40:24.495811  358628 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 21:40:24.495815  358628 command_runner.go:130] > [crio]
	I0108 21:40:24.495822  358628 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 21:40:24.495830  358628 command_runner.go:130] > # container images, in this directory.
	I0108 21:40:24.495837  358628 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 21:40:24.495846  358628 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 21:40:24.495853  358628 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 21:40:24.495860  358628 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 21:40:24.495868  358628 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 21:40:24.495876  358628 command_runner.go:130] > storage_driver = "overlay"
	I0108 21:40:24.495882  358628 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 21:40:24.495890  358628 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 21:40:24.495897  358628 command_runner.go:130] > storage_option = [
	I0108 21:40:24.495902  358628 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 21:40:24.495907  358628 command_runner.go:130] > ]
	I0108 21:40:24.495914  358628 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 21:40:24.495922  358628 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 21:40:24.495927  358628 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 21:40:24.495934  358628 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 21:40:24.495942  358628 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 21:40:24.495947  358628 command_runner.go:130] > # always happen on a node reboot
	I0108 21:40:24.495954  358628 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 21:40:24.495959  358628 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 21:40:24.495967  358628 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 21:40:24.495977  358628 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 21:40:24.495984  358628 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 21:40:24.495991  358628 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 21:40:24.496001  358628 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 21:40:24.496011  358628 command_runner.go:130] > # internal_wipe = true
	I0108 21:40:24.496019  358628 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 21:40:24.496026  358628 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 21:40:24.496034  358628 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 21:40:24.496045  358628 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 21:40:24.496057  358628 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 21:40:24.496067  358628 command_runner.go:130] > [crio.api]
	I0108 21:40:24.496079  358628 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 21:40:24.496090  358628 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 21:40:24.496102  358628 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 21:40:24.496112  358628 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 21:40:24.496125  358628 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 21:40:24.496136  358628 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 21:40:24.496145  358628 command_runner.go:130] > # stream_port = "0"
	I0108 21:40:24.496156  358628 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 21:40:24.496164  358628 command_runner.go:130] > # stream_enable_tls = false
	I0108 21:40:24.496173  358628 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 21:40:24.496178  358628 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 21:40:24.496186  358628 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 21:40:24.496195  358628 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 21:40:24.496201  358628 command_runner.go:130] > # minutes.
	I0108 21:40:24.496205  358628 command_runner.go:130] > # stream_tls_cert = ""
	I0108 21:40:24.496214  358628 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 21:40:24.496224  358628 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 21:40:24.496231  358628 command_runner.go:130] > # stream_tls_key = ""
	I0108 21:40:24.496237  358628 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 21:40:24.496245  358628 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 21:40:24.496251  358628 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 21:40:24.496258  358628 command_runner.go:130] > # stream_tls_ca = ""
	I0108 21:40:24.496266  358628 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:40:24.496273  358628 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 21:40:24.496280  358628 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 21:40:24.496287  358628 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 21:40:24.496302  358628 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 21:40:24.496310  358628 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 21:40:24.496316  358628 command_runner.go:130] > [crio.runtime]
	I0108 21:40:24.496324  358628 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 21:40:24.496331  358628 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 21:40:24.496336  358628 command_runner.go:130] > # "nofile=1024:2048"
	I0108 21:40:24.496353  358628 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 21:40:24.496360  358628 command_runner.go:130] > # default_ulimits = [
	I0108 21:40:24.496363  358628 command_runner.go:130] > # ]
	I0108 21:40:24.496372  358628 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 21:40:24.496379  358628 command_runner.go:130] > # no_pivot = false
	I0108 21:40:24.496385  358628 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 21:40:24.496393  358628 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 21:40:24.496400  358628 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 21:40:24.496408  358628 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 21:40:24.496416  358628 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 21:40:24.496422  358628 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:40:24.496429  358628 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 21:40:24.496434  358628 command_runner.go:130] > # Cgroup setting for conmon
	I0108 21:40:24.496443  358628 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 21:40:24.496450  358628 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 21:40:24.496457  358628 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 21:40:24.496464  358628 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 21:40:24.496473  358628 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 21:40:24.496479  358628 command_runner.go:130] > conmon_env = [
	I0108 21:40:24.496486  358628 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 21:40:24.496491  358628 command_runner.go:130] > ]
	I0108 21:40:24.496497  358628 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 21:40:24.496504  358628 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 21:40:24.496511  358628 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 21:40:24.496517  358628 command_runner.go:130] > # default_env = [
	I0108 21:40:24.496521  358628 command_runner.go:130] > # ]
	I0108 21:40:24.496526  358628 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 21:40:24.496531  358628 command_runner.go:130] > # selinux = false
	I0108 21:40:24.496537  358628 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 21:40:24.496545  358628 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 21:40:24.496555  358628 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 21:40:24.496562  358628 command_runner.go:130] > # seccomp_profile = ""
	I0108 21:40:24.496573  358628 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 21:40:24.496582  358628 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 21:40:24.496590  358628 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 21:40:24.496597  358628 command_runner.go:130] > # which might increase security.
	I0108 21:40:24.496601  358628 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 21:40:24.496610  358628 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 21:40:24.496619  358628 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 21:40:24.496628  358628 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 21:40:24.496636  358628 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 21:40:24.496643  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:40:24.496648  358628 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 21:40:24.496655  358628 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 21:40:24.496662  358628 command_runner.go:130] > # the cgroup blockio controller.
	I0108 21:40:24.496666  358628 command_runner.go:130] > # blockio_config_file = ""
	I0108 21:40:24.496675  358628 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 21:40:24.496681  358628 command_runner.go:130] > # irqbalance daemon.
	I0108 21:40:24.496687  358628 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 21:40:24.496695  358628 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 21:40:24.496703  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:40:24.496709  358628 command_runner.go:130] > # rdt_config_file = ""
	I0108 21:40:24.496715  358628 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 21:40:24.496722  358628 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 21:40:24.496728  358628 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 21:40:24.496734  358628 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 21:40:24.496741  358628 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 21:40:24.496749  358628 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 21:40:24.496755  358628 command_runner.go:130] > # will be added.
	I0108 21:40:24.496759  358628 command_runner.go:130] > # default_capabilities = [
	I0108 21:40:24.496765  358628 command_runner.go:130] > # 	"CHOWN",
	I0108 21:40:24.496769  358628 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 21:40:24.496776  358628 command_runner.go:130] > # 	"FSETID",
	I0108 21:40:24.496780  358628 command_runner.go:130] > # 	"FOWNER",
	I0108 21:40:24.496786  358628 command_runner.go:130] > # 	"SETGID",
	I0108 21:40:24.496790  358628 command_runner.go:130] > # 	"SETUID",
	I0108 21:40:24.496796  358628 command_runner.go:130] > # 	"SETPCAP",
	I0108 21:40:24.496800  358628 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 21:40:24.496806  358628 command_runner.go:130] > # 	"KILL",
	I0108 21:40:24.496811  358628 command_runner.go:130] > # ]
	I0108 21:40:24.496819  358628 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 21:40:24.496827  358628 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:40:24.496833  358628 command_runner.go:130] > # default_sysctls = [
	I0108 21:40:24.496837  358628 command_runner.go:130] > # ]
	I0108 21:40:24.496842  358628 command_runner.go:130] > # List of devices on the host that a
	I0108 21:40:24.496850  358628 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 21:40:24.496856  358628 command_runner.go:130] > # allowed_devices = [
	I0108 21:40:24.496860  358628 command_runner.go:130] > # 	"/dev/fuse",
	I0108 21:40:24.496866  358628 command_runner.go:130] > # ]
	I0108 21:40:24.496871  358628 command_runner.go:130] > # List of additional devices. specified as
	I0108 21:40:24.496881  358628 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 21:40:24.496888  358628 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 21:40:24.496906  358628 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 21:40:24.496913  358628 command_runner.go:130] > # additional_devices = [
	I0108 21:40:24.496916  358628 command_runner.go:130] > # ]
	I0108 21:40:24.496921  358628 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 21:40:24.496928  358628 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 21:40:24.496932  358628 command_runner.go:130] > # 	"/etc/cdi",
	I0108 21:40:24.496938  358628 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 21:40:24.496942  358628 command_runner.go:130] > # ]
	I0108 21:40:24.496950  358628 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 21:40:24.496958  358628 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 21:40:24.496963  358628 command_runner.go:130] > # Defaults to false.
	I0108 21:40:24.496968  358628 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 21:40:24.496978  358628 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 21:40:24.496986  358628 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 21:40:24.496991  358628 command_runner.go:130] > # hooks_dir = [
	I0108 21:40:24.496996  358628 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 21:40:24.497001  358628 command_runner.go:130] > # ]
	I0108 21:40:24.497007  358628 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 21:40:24.497016  358628 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 21:40:24.497023  358628 command_runner.go:130] > # its default mounts from the following two files:
	I0108 21:40:24.497027  358628 command_runner.go:130] > #
	I0108 21:40:24.497034  358628 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 21:40:24.497047  358628 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 21:40:24.497060  358628 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 21:40:24.497069  358628 command_runner.go:130] > #
	I0108 21:40:24.497081  358628 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 21:40:24.497094  358628 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 21:40:24.497107  358628 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 21:40:24.497118  358628 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 21:40:24.497126  358628 command_runner.go:130] > #
	I0108 21:40:24.497135  358628 command_runner.go:130] > # default_mounts_file = ""
	I0108 21:40:24.497143  358628 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 21:40:24.497150  358628 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 21:40:24.497156  358628 command_runner.go:130] > pids_limit = 1024
	I0108 21:40:24.497163  358628 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 21:40:24.497172  358628 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 21:40:24.497180  358628 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 21:40:24.497190  358628 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 21:40:24.497197  358628 command_runner.go:130] > # log_size_max = -1
	I0108 21:40:24.497203  358628 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 21:40:24.497209  358628 command_runner.go:130] > # log_to_journald = false
	I0108 21:40:24.497216  358628 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 21:40:24.497223  358628 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 21:40:24.497229  358628 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 21:40:24.497236  358628 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 21:40:24.497241  358628 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 21:40:24.497248  358628 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 21:40:24.497253  358628 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 21:40:24.497259  358628 command_runner.go:130] > # read_only = false
	I0108 21:40:24.497265  358628 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 21:40:24.497273  358628 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 21:40:24.497280  358628 command_runner.go:130] > # live configuration reload.
	I0108 21:40:24.497284  358628 command_runner.go:130] > # log_level = "info"
	I0108 21:40:24.497292  358628 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 21:40:24.497297  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:40:24.497303  358628 command_runner.go:130] > # log_filter = ""
	I0108 21:40:24.497309  358628 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 21:40:24.497317  358628 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 21:40:24.497323  358628 command_runner.go:130] > # separated by comma.
	I0108 21:40:24.497329  358628 command_runner.go:130] > # uid_mappings = ""
	I0108 21:40:24.497337  358628 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 21:40:24.497346  358628 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 21:40:24.497352  358628 command_runner.go:130] > # separated by comma.
	I0108 21:40:24.497356  358628 command_runner.go:130] > # gid_mappings = ""
	I0108 21:40:24.497364  358628 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 21:40:24.497372  358628 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:40:24.497380  358628 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:40:24.497387  358628 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 21:40:24.497393  358628 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 21:40:24.497401  358628 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 21:40:24.497407  358628 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 21:40:24.497414  358628 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 21:40:24.497420  358628 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 21:40:24.497428  358628 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 21:40:24.497436  358628 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 21:40:24.497443  358628 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 21:40:24.497448  358628 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 21:40:24.497457  358628 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 21:40:24.497464  358628 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 21:40:24.497469  358628 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 21:40:24.497477  358628 command_runner.go:130] > drop_infra_ctr = false
	I0108 21:40:24.497483  358628 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 21:40:24.497491  358628 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 21:40:24.497498  358628 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 21:40:24.497504  358628 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 21:40:24.497511  358628 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 21:40:24.497518  358628 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 21:40:24.497524  358628 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 21:40:24.497531  358628 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 21:40:24.497537  358628 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 21:40:24.497543  358628 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 21:40:24.497552  358628 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 21:40:24.497560  358628 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 21:40:24.497570  358628 command_runner.go:130] > # default_runtime = "runc"
	I0108 21:40:24.497578  358628 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 21:40:24.497586  358628 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 21:40:24.497597  358628 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 21:40:24.497604  358628 command_runner.go:130] > # creation as a file is not desired either.
	I0108 21:40:24.497612  358628 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 21:40:24.497620  358628 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 21:40:24.497625  358628 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 21:40:24.497628  358628 command_runner.go:130] > # ]
	I0108 21:40:24.497637  358628 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 21:40:24.497644  358628 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 21:40:24.497652  358628 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 21:40:24.497661  358628 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 21:40:24.497666  358628 command_runner.go:130] > #
	I0108 21:40:24.497671  358628 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 21:40:24.497678  358628 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 21:40:24.497683  358628 command_runner.go:130] > #  runtime_type = "oci"
	I0108 21:40:24.497689  358628 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 21:40:24.497694  358628 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 21:40:24.497701  358628 command_runner.go:130] > #  allowed_annotations = []
	I0108 21:40:24.497705  358628 command_runner.go:130] > # Where:
	I0108 21:40:24.497713  358628 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 21:40:24.497719  358628 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 21:40:24.497727  358628 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 21:40:24.497733  358628 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 21:40:24.497739  358628 command_runner.go:130] > #   in $PATH.
	I0108 21:40:24.497746  358628 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 21:40:24.497753  358628 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 21:40:24.497760  358628 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 21:40:24.497767  358628 command_runner.go:130] > #   state.
	I0108 21:40:24.497773  358628 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 21:40:24.497781  358628 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 21:40:24.497790  358628 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 21:40:24.497797  358628 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 21:40:24.497805  358628 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 21:40:24.497814  358628 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 21:40:24.497819  358628 command_runner.go:130] > #   The currently recognized values are:
	I0108 21:40:24.497828  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 21:40:24.497837  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 21:40:24.497845  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 21:40:24.497853  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 21:40:24.497861  358628 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 21:40:24.497870  358628 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 21:40:24.497878  358628 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 21:40:24.497887  358628 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 21:40:24.497895  358628 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 21:40:24.497902  358628 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 21:40:24.497906  358628 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 21:40:24.497913  358628 command_runner.go:130] > runtime_type = "oci"
	I0108 21:40:24.497917  358628 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 21:40:24.497921  358628 command_runner.go:130] > runtime_config_path = ""
	I0108 21:40:24.497927  358628 command_runner.go:130] > monitor_path = ""
	I0108 21:40:24.497931  358628 command_runner.go:130] > monitor_cgroup = ""
	I0108 21:40:24.497938  358628 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 21:40:24.497944  358628 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 21:40:24.497950  358628 command_runner.go:130] > # running containers
	I0108 21:40:24.497955  358628 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 21:40:24.497963  358628 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 21:40:24.497996  358628 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 21:40:24.498006  358628 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 21:40:24.498011  358628 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 21:40:24.498015  358628 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 21:40:24.498020  358628 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 21:40:24.498027  358628 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 21:40:24.498033  358628 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 21:40:24.498044  358628 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 21:40:24.498057  358628 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 21:40:24.498070  358628 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 21:40:24.498083  358628 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 21:40:24.498097  358628 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 21:40:24.498113  358628 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 21:40:24.498125  358628 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 21:40:24.498138  358628 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 21:40:24.498154  358628 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 21:40:24.498168  358628 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 21:40:24.498183  358628 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 21:40:24.498192  358628 command_runner.go:130] > # Example:
	I0108 21:40:24.498203  358628 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 21:40:24.498214  358628 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 21:40:24.498223  358628 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 21:40:24.498231  358628 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 21:40:24.498235  358628 command_runner.go:130] > # cpuset = 0
	I0108 21:40:24.498240  358628 command_runner.go:130] > # cpushares = "0-1"
	I0108 21:40:24.498244  358628 command_runner.go:130] > # Where:
	I0108 21:40:24.498249  358628 command_runner.go:130] > # The workload name is workload-type.
	I0108 21:40:24.498258  358628 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 21:40:24.498266  358628 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 21:40:24.498274  358628 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 21:40:24.498282  358628 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 21:40:24.498290  358628 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 21:40:24.498295  358628 command_runner.go:130] > # 
	I0108 21:40:24.498301  358628 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 21:40:24.498308  358628 command_runner.go:130] > #
	I0108 21:40:24.498314  358628 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 21:40:24.498322  358628 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 21:40:24.498329  358628 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 21:40:24.498338  358628 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 21:40:24.498345  358628 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 21:40:24.498349  358628 command_runner.go:130] > [crio.image]
	I0108 21:40:24.498357  358628 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 21:40:24.498361  358628 command_runner.go:130] > # default_transport = "docker://"
	I0108 21:40:24.498369  358628 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 21:40:24.498377  358628 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:40:24.498383  358628 command_runner.go:130] > # global_auth_file = ""
	I0108 21:40:24.498388  358628 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 21:40:24.498396  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:40:24.498403  358628 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 21:40:24.498409  358628 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 21:40:24.498417  358628 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 21:40:24.498423  358628 command_runner.go:130] > # This option supports live configuration reload.
	I0108 21:40:24.498429  358628 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 21:40:24.498437  358628 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 21:40:24.498445  358628 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 21:40:24.498454  358628 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 21:40:24.498462  358628 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 21:40:24.498469  358628 command_runner.go:130] > # pause_command = "/pause"
	I0108 21:40:24.498475  358628 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 21:40:24.498483  358628 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 21:40:24.498492  358628 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 21:40:24.498499  358628 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 21:40:24.498506  358628 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 21:40:24.498513  358628 command_runner.go:130] > # signature_policy = ""
	I0108 21:40:24.498519  358628 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 21:40:24.498527  358628 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 21:40:24.498531  358628 command_runner.go:130] > # changing them here.
	I0108 21:40:24.498537  358628 command_runner.go:130] > # insecure_registries = [
	I0108 21:40:24.498541  358628 command_runner.go:130] > # ]
	I0108 21:40:24.498551  358628 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 21:40:24.498559  358628 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 21:40:24.498567  358628 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 21:40:24.498574  358628 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 21:40:24.498579  358628 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 21:40:24.498587  358628 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 21:40:24.498594  358628 command_runner.go:130] > # CNI plugins.
	I0108 21:40:24.498598  358628 command_runner.go:130] > [crio.network]
	I0108 21:40:24.498606  358628 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 21:40:24.498612  358628 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 21:40:24.498618  358628 command_runner.go:130] > # cni_default_network = ""
	I0108 21:40:24.498624  358628 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 21:40:24.498631  358628 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 21:40:24.498637  358628 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 21:40:24.498643  358628 command_runner.go:130] > # plugin_dirs = [
	I0108 21:40:24.498647  358628 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 21:40:24.498653  358628 command_runner.go:130] > # ]
	I0108 21:40:24.498659  358628 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 21:40:24.498666  358628 command_runner.go:130] > [crio.metrics]
	I0108 21:40:24.498672  358628 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 21:40:24.498678  358628 command_runner.go:130] > enable_metrics = true
	I0108 21:40:24.498684  358628 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 21:40:24.498691  358628 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 21:40:24.498697  358628 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 21:40:24.498705  358628 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 21:40:24.498713  358628 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 21:40:24.498719  358628 command_runner.go:130] > # metrics_collectors = [
	I0108 21:40:24.498723  358628 command_runner.go:130] > # 	"operations",
	I0108 21:40:24.498730  358628 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 21:40:24.498735  358628 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 21:40:24.498740  358628 command_runner.go:130] > # 	"operations_errors",
	I0108 21:40:24.498747  358628 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 21:40:24.498751  358628 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 21:40:24.498758  358628 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 21:40:24.498763  358628 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 21:40:24.498769  358628 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 21:40:24.498774  358628 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 21:40:24.498780  358628 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 21:40:24.498784  358628 command_runner.go:130] > # 	"containers_oom_total",
	I0108 21:40:24.498791  358628 command_runner.go:130] > # 	"containers_oom",
	I0108 21:40:24.498795  358628 command_runner.go:130] > # 	"processes_defunct",
	I0108 21:40:24.498802  358628 command_runner.go:130] > # 	"operations_total",
	I0108 21:40:24.498806  358628 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 21:40:24.498813  358628 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 21:40:24.498817  358628 command_runner.go:130] > # 	"operations_errors_total",
	I0108 21:40:24.498822  358628 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 21:40:24.498829  358628 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 21:40:24.498833  358628 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 21:40:24.498840  358628 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 21:40:24.498845  358628 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 21:40:24.498852  358628 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 21:40:24.498855  358628 command_runner.go:130] > # ]
	I0108 21:40:24.498863  358628 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 21:40:24.498867  358628 command_runner.go:130] > # metrics_port = 9090
	I0108 21:40:24.498874  358628 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 21:40:24.498879  358628 command_runner.go:130] > # metrics_socket = ""
	I0108 21:40:24.498887  358628 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 21:40:24.498893  358628 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 21:40:24.498901  358628 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 21:40:24.498906  358628 command_runner.go:130] > # certificate on any modification event.
	I0108 21:40:24.498912  358628 command_runner.go:130] > # metrics_cert = ""
	I0108 21:40:24.498917  358628 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 21:40:24.498924  358628 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 21:40:24.498928  358628 command_runner.go:130] > # metrics_key = ""
	I0108 21:40:24.498936  358628 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 21:40:24.498942  358628 command_runner.go:130] > [crio.tracing]
	I0108 21:40:24.498948  358628 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 21:40:24.498954  358628 command_runner.go:130] > # enable_tracing = false
	I0108 21:40:24.498960  358628 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 21:40:24.498966  358628 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 21:40:24.498972  358628 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 21:40:24.498978  358628 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 21:40:24.498984  358628 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 21:40:24.498989  358628 command_runner.go:130] > [crio.stats]
	I0108 21:40:24.498996  358628 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 21:40:24.499003  358628 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 21:40:24.499010  358628 command_runner.go:130] > # stats_collection_period = 0
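Note: the [crio.metrics] section in the config dumped above sets enable_metrics = true while leaving the collector list and port commented out, so CRI-O serves a Prometheus endpoint on the node at the default metrics_port = 9090. A minimal Go sketch of reading that endpoint from inside the VM follows; the 127.0.0.1:9090 address and the example metric name in the comment are assumptions based on the defaults shown above, not something this run verified.

// metricscheck.go: fetch CRI-O's Prometheus metrics endpoint and print the
// first few lines. Assumes enable_metrics = true and the default
// metrics_port = 9090 from the configuration dumped above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatalf("fetching CRI-O metrics: %v", err)
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	for i := 0; i < 10 && sc.Scan(); i++ {
		fmt.Println(sc.Text()) // e.g. a crio_operations counter, per the collector comments above
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("reading response: %v", err)
	}
}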
	I0108 21:40:24.499090  358628 cni.go:84] Creating CNI manager for ""
	I0108 21:40:24.499104  358628 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:40:24.499115  358628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:40:24.499144  358628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-962345 NodeName:multinode-962345-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:40:24.499272  358628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-962345-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:40:24.499322  358628 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-962345-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:40:24.499395  358628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:40:24.508311  358628 command_runner.go:130] > kubeadm
	I0108 21:40:24.508338  358628 command_runner.go:130] > kubectl
	I0108 21:40:24.508344  358628 command_runner.go:130] > kubelet
	I0108 21:40:24.508591  358628 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:40:24.508660  358628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 21:40:24.517855  358628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0108 21:40:24.534167  358628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:40:24.550570  358628 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0108 21:40:24.554422  358628 command_runner.go:130] > 192.168.39.239	control-plane.minikube.internal
	I0108 21:40:24.554585  358628 host.go:66] Checking if "multinode-962345" exists ...
	I0108 21:40:24.554849  358628 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:40:24.554924  358628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:40:24.554973  358628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:40:24.570310  358628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0108 21:40:24.570776  358628 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:40:24.571238  358628 main.go:141] libmachine: Using API Version  1
	I0108 21:40:24.571261  358628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:40:24.571651  358628 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:40:24.571837  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:40:24.571991  358628 start.go:304] JoinCluster: &{Name:multinode-962345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-962345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.120 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:40:24.572123  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 21:40:24.572140  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:40:24.574902  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:40:24.575336  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:40:24.575376  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:40:24.575520  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:40:24.575701  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:40:24.575833  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:40:24.575939  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:40:24.771029  358628 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yq2i1m.e9m7zch51kgnsapb --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
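The join command above was produced by running kubeadm token create --print-join-command --ttl=0 over SSH on the control-plane node (see the ssh_runner call at 21:40:24.572123). A minimal local Go sketch of the same step, assuming kubeadm is on PATH and the caller has the privileges kubeadm needs; in the test itself the command is wrapped in sudo with an adjusted PATH and executed remotely.

// joincmd.go: generate a kubeadm join command the way the log above does.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm token create: %v\n%s", err, out)
	}
	// The output is a ready-to-run "kubeadm join <endpoint> --token ...
	// --discovery-token-ca-cert-hash sha256:..." line, as seen at 21:40:24.771029.
	fmt.Printf("%s", out)
}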
	I0108 21:40:24.772739  358628 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.120 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 21:40:24.772781  358628 host.go:66] Checking if "multinode-962345" exists ...
	I0108 21:40:24.773136  358628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:40:24.773184  358628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:40:24.787763  358628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37263
	I0108 21:40:24.788267  358628 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:40:24.788781  358628 main.go:141] libmachine: Using API Version  1
	I0108 21:40:24.788803  358628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:40:24.789113  358628 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:40:24.789338  358628 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:40:24.789510  358628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-962345-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 21:40:24.789535  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:40:24.792165  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:40:24.792614  358628 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:36:17 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:40:24.792635  358628 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:40:24.792769  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:40:24.792939  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:40:24.793101  358628 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:40:24.793229  358628 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:40:24.990160  358628 command_runner.go:130] > node/multinode-962345-m03 cordoned
	I0108 21:40:28.027531  358628 command_runner.go:130] > pod "busybox-5bc68d56bd-spk2c" has DeletionTimestamp older than 1 seconds, skipping
	I0108 21:40:28.027557  358628 command_runner.go:130] > node/multinode-962345-m03 drained
	I0108 21:40:28.029364  358628 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 21:40:28.029380  358628 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-psmlz, kube-system/kube-proxy-cpq6p
	I0108 21:40:28.029414  358628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-962345-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.239867024s)
	I0108 21:40:28.029444  358628 node.go:108] successfully drained node "m03"
	I0108 21:40:28.029882  358628 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:40:28.030221  358628 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:40:28.030685  358628 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 21:40:28.030754  358628 round_trippers.go:463] DELETE https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:40:28.030762  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:28.030773  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:28.030786  358628 round_trippers.go:473]     Content-Type: application/json
	I0108 21:40:28.030798  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:28.043328  358628 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0108 21:40:28.043369  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:28.043381  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:28.043396  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:28.043405  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:28.043417  358628 round_trippers.go:580]     Content-Length: 171
	I0108 21:40:28.043425  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:28 GMT
	I0108 21:40:28.043434  358628 round_trippers.go:580]     Audit-Id: 81663dec-6e30-4f2f-b8bc-5da199fb7a8b
	I0108 21:40:28.043443  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:28.043475  358628 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-962345-m03","kind":"nodes","uid":"d31cb22f-3104-4da9-bd90-2f7e1fa3889a"}}
	I0108 21:40:28.043518  358628 node.go:124] successfully deleted node "m03"
	I0108 21:40:28.043532  358628 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.120 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
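Before rejoining, the stale Node object for m03 is removed with a raw DELETE against /api/v1/nodes/multinode-962345-m03 (200 OK above). A sketch of the same delete using client-go's typed client rather than raw round-trippers; the kubeconfig path is the one the log loads at 21:40:28.029882, and the drain is assumed to have completed first, as it does above.

// nodedelete.go: delete the stale worker Node object before re-joining it.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17866-334768/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Drain the node first (see the kubectl drain call above), then remove it.
	if err := cs.CoreV1().Nodes().Delete(context.Background(),
		"multinode-962345-m03", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("node multinode-962345-m03 deleted")
}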
	I0108 21:40:28.043565  358628 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.120 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 21:40:28.043590  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yq2i1m.e9m7zch51kgnsapb --discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-962345-m03"
	I0108 21:40:28.105370  358628 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:40:28.279261  358628 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 21:40:28.279293  358628 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 21:40:28.341695  358628 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:40:28.341734  358628 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:40:28.341746  358628 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:40:28.486786  358628 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 21:40:29.015144  358628 command_runner.go:130] > This node has joined the cluster:
	I0108 21:40:29.015174  358628 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 21:40:29.015184  358628 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 21:40:29.015192  358628 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 21:40:29.017959  358628 command_runner.go:130] ! W0108 21:40:28.097193    2347 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 21:40:29.017992  358628 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 21:40:29.018005  358628 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 21:40:29.018017  358628 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 21:40:29.018164  358628 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 21:40:29.274372  358628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-962345 minikube.k8s.io/updated_at=2024_01_08T21_40_29_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:40:29.393519  358628 command_runner.go:130] > node/multinode-962345-m02 labeled
	I0108 21:40:29.393545  358628 command_runner.go:130] > node/multinode-962345-m03 labeled
	I0108 21:40:29.393691  358628 start.go:306] JoinCluster complete in 4.821695033s
	I0108 21:40:29.393720  358628 cni.go:84] Creating CNI manager for ""
	I0108 21:40:29.393728  358628 cni.go:136] 3 nodes found, recommending kindnet
	I0108 21:40:29.393790  358628 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:40:29.400921  358628 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:40:29.400955  358628 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:40:29.400965  358628 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:40:29.400975  358628 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:40:29.400991  358628 command_runner.go:130] > Access: 2024-01-08 21:36:17.412212418 +0000
	I0108 21:40:29.401003  358628 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0108 21:40:29.401011  358628 command_runner.go:130] > Change: 2024-01-08 21:36:15.543212418 +0000
	I0108 21:40:29.401018  358628 command_runner.go:130] >  Birth: -
	I0108 21:40:29.401099  358628 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:40:29.401115  358628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:40:29.423611  358628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:40:29.822208  358628 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:40:29.822244  358628 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:40:29.822253  358628 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:40:29.822260  358628 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 21:40:29.822683  358628 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:40:29.822893  358628 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:40:29.823203  358628 round_trippers.go:463] GET https://192.168.39.239:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:40:29.823217  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.823225  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.823230  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.825871  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:29.825887  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.825893  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.825899  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.825904  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.825909  358628 round_trippers.go:580]     Content-Length: 291
	I0108 21:40:29.825915  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.825919  358628 round_trippers.go:580]     Audit-Id: ac3600c9-380c-445d-9f98-34c826b9edc5
	I0108 21:40:29.825924  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.825949  358628 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9a0db73a-68c0-469b-b860-0baad5e41646","resourceVersion":"883","creationTimestamp":"2024-01-08T21:26:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:40:29.826039  358628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-962345" context rescaled to 1 replicas
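Here minikube reads the coredns Scale subresource and leaves the deployment at 1 replica. A client-go sketch of the same GetScale/UpdateScale round trip; the kubeconfig path is the host-side one from this run, and the no-op guard simply mirrors the "rescaled to 1" log line above rather than anything verified beyond it.

// corednsscale.go: keep the coredns deployment at a single replica.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17866-334768/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
	log.Println("coredns deployment held at 1 replica")
}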
	I0108 21:40:29.826067  358628 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.120 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 21:40:29.827896  358628 out.go:177] * Verifying Kubernetes components...
	I0108 21:40:29.829163  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:40:29.850187  358628 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:40:29.850520  358628 kapi.go:59] client config for multinode-962345: &rest.Config{Host:"https://192.168.39.239:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/profiles/multinode-962345/client.key", CAFile:"/home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:40:29.850853  358628 node_ready.go:35] waiting up to 6m0s for node "multinode-962345-m03" to be "Ready" ...
	I0108 21:40:29.850946  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:40:29.850956  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.850965  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.850978  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.853593  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:29.853614  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.853624  358628 round_trippers.go:580]     Audit-Id: 768773c2-f583-4c16-af52-40128e32760e
	I0108 21:40:29.853631  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.853639  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.853646  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.853654  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.853663  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.853838  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m03","uid":"821547e7-268a-49a1-8fb0-6496945957ac","resourceVersion":"1221","creationTimestamp":"2024-01-08T21:40:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:40:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 21:40:29.854156  358628 node_ready.go:49] node "multinode-962345-m03" has status "Ready":"True"
	I0108 21:40:29.854173  358628 node_ready.go:38] duration metric: took 3.298828ms waiting for node "multinode-962345-m03" to be "Ready" ...
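node_ready.go polls GET /api/v1/nodes/multinode-962345-m03 until the node reports Ready, with a 6m0s budget (here it was already Ready on the first check, 3.29ms in). A compact client-go sketch of that wait; the 3-second poll interval is an assumption, not the interval minikube uses.

// nodeready.go: wait for a node's Ready condition to report True.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17866-334768/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-962345-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			log.Println("node is Ready")
			return
		}
		time.Sleep(3 * time.Second)
	}
	log.Fatal("timed out waiting for node to become Ready")
}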
	I0108 21:40:29.854187  358628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:40:29.854262  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods
	I0108 21:40:29.854273  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.854285  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.854298  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.869605  358628 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0108 21:40:29.869635  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.869643  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.869649  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.869655  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.869660  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.869665  358628 round_trippers.go:580]     Audit-Id: ee8efd9f-868a-410f-a1ee-86ca13b258d0
	I0108 21:40:29.869676  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.872125  358628 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1228"},"items":[{"metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"871","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82087 chars]
	I0108 21:40:29.874525  358628 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:29.874629  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-v6dmd
	I0108 21:40:29.874641  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.874652  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.874662  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.877982  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:40:29.878010  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.878021  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.878030  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.878038  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.878047  358628 round_trippers.go:580]     Audit-Id: 8d9a34a2-a114-4470-a856-d17bcb76b047
	I0108 21:40:29.878057  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.878071  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.878339  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-v6dmd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9c1edff2-3b29-4045-b7b9-935c47115d16","resourceVersion":"871","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"de110abc-4d6f-48df-8713-54b50a85c217","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de110abc-4d6f-48df-8713-54b50a85c217\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 21:40:29.878774  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:40:29.878813  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.878823  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.878830  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.881247  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:29.881261  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.881267  358628 round_trippers.go:580]     Audit-Id: dd635493-a764-4d65-bb34-7459bcc0ca4d
	I0108 21:40:29.881273  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.881277  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.881285  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.881295  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.881304  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.881556  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:40:29.881976  358628 pod_ready.go:92] pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace has status "Ready":"True"
	I0108 21:40:29.882006  358628 pod_ready.go:81] duration metric: took 7.455302ms waiting for pod "coredns-5dd5756b68-v6dmd" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:29.882023  358628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:29.882099  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-962345
	I0108 21:40:29.882110  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.882121  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.882130  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.884430  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:29.884447  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.884454  358628 round_trippers.go:580]     Audit-Id: 2d7e44e9-68ed-411f-b0f4-94761b86d524
	I0108 21:40:29.884463  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.884471  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.884477  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.884483  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.884488  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.884918  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-962345","namespace":"kube-system","uid":"44773ce7-5393-4178-a985-d8bf216f88f1","resourceVersion":"864","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.239:2379","kubernetes.io/config.hash":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.mirror":"4c6fd29cfc92d55a7ce4e2f96974ea73","kubernetes.io/config.seen":"2024-01-08T21:26:26.755438257Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 21:40:29.885263  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:40:29.885275  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.885282  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.885288  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.887477  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:29.887497  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.887507  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.887516  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.887528  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.887539  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.887548  358628 round_trippers.go:580]     Audit-Id: 303e5896-2551-4342-a4b7-34334405f461
	I0108 21:40:29.887553  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.887879  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:40:29.888182  358628 pod_ready.go:92] pod "etcd-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:40:29.888196  358628 pod_ready.go:81] duration metric: took 6.16703ms waiting for pod "etcd-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:29.888211  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:29.888271  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-962345
	I0108 21:40:29.888278  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.888285  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.888291  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.890542  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:29.890554  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.890560  358628 round_trippers.go:580]     Audit-Id: 5414fe92-948b-4386-9055-02c3906cac1b
	I0108 21:40:29.890566  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.890571  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.890576  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.890581  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.890589  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.890987  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-962345","namespace":"kube-system","uid":"bea03251-08df-4434-bc4a-36ef454e151e","resourceVersion":"862","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.239:8443","kubernetes.io/config.hash":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.mirror":"6dbed9a3f64fb2ec41dcc39fae30b654","kubernetes.io/config.seen":"2024-01-08T21:26:26.755439577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 21:40:29.891349  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:40:29.891376  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.891386  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.891395  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.893783  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:29.893799  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.893806  358628 round_trippers.go:580]     Audit-Id: c14d8447-2b23-48d4-9cf1-8522e32dc8c1
	I0108 21:40:29.893811  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.893817  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.893822  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.893828  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.893836  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.893993  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:40:29.894254  358628 pod_ready.go:92] pod "kube-apiserver-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:40:29.894268  358628 pod_ready.go:81] duration metric: took 6.048276ms waiting for pod "kube-apiserver-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:29.894276  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:29.894322  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-962345
	I0108 21:40:29.894330  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.894336  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.894342  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.896966  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:29.896980  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.896989  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.896997  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.897005  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.897014  358628 round_trippers.go:580]     Audit-Id: 143c421c-4ff6-4be7-991b-b1b1368c0cac
	I0108 21:40:29.897028  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.897041  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.897206  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-962345","namespace":"kube-system","uid":"80b86d62-83f0-4550-988f-6255409d39da","resourceVersion":"865","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.mirror":"d5f90f3600544be0f17e2e088ab14d51","kubernetes.io/config.seen":"2024-01-08T21:26:26.755427365Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 21:40:29.897571  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:40:29.897585  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:29.897596  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:29.897606  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:29.909971  358628 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0108 21:40:29.909999  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:29.910010  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:29.910020  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:29.910038  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:29.910046  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:29.910061  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:29 GMT
	I0108 21:40:29.910068  358628 round_trippers.go:580]     Audit-Id: afc248a6-c109-4cf7-b726-2f53f92c845a
	I0108 21:40:29.912310  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:40:29.912676  358628 pod_ready.go:92] pod "kube-controller-manager-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:40:29.912696  358628 pod_ready.go:81] duration metric: took 18.41259ms waiting for pod "kube-controller-manager-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:29.912712  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:30.051057  358628 request.go:629] Waited for 138.236953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:40:30.051131  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2c2t6
	I0108 21:40:30.051135  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:30.051143  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:30.051150  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:30.054625  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:40:30.054652  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:30.054663  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:30.054672  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:30 GMT
	I0108 21:40:30.054680  358628 round_trippers.go:580]     Audit-Id: 0b1ff29e-5616-45da-b7f7-d6b65267319a
	I0108 21:40:30.054692  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:30.054703  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:30.054714  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:30.055011  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2c2t6","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ea7cbf1-2b52-4c49-b07c-a5b2cd02972e","resourceVersion":"1053","creationTimestamp":"2024-01-08T21:27:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:27:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0108 21:40:30.251973  358628 request.go:629] Waited for 196.396357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:40:30.252049  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m02
	I0108 21:40:30.252054  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:30.252062  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:30.252068  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:30.256223  358628 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:40:30.256245  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:30.256255  358628 round_trippers.go:580]     Audit-Id: 6d90a175-2381-408f-8f83-e13049363454
	I0108 21:40:30.256263  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:30.256270  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:30.256277  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:30.256285  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:30.256293  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:30 GMT
	I0108 21:40:30.256534  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m02","uid":"3106d5b8-f2c3-437d-bf0a-adb8732a102b","resourceVersion":"1220","creationTimestamp":"2024-01-08T21:38:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:38:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 21:40:30.256814  358628 pod_ready.go:92] pod "kube-proxy-2c2t6" in "kube-system" namespace has status "Ready":"True"
	I0108 21:40:30.256830  358628 pod_ready.go:81] duration metric: took 344.110146ms waiting for pod "kube-proxy-2c2t6" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:30.256840  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:30.451933  358628 request.go:629] Waited for 194.994019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:40:30.452004  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmjzs
	I0108 21:40:30.452009  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:30.452017  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:30.452024  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:30.455476  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:40:30.455499  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:30.455510  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:30 GMT
	I0108 21:40:30.455519  358628 round_trippers.go:580]     Audit-Id: f0065e31-c534-4227-8afd-ed84303c5869
	I0108 21:40:30.455531  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:30.455540  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:30.455548  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:30.455556  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:30.455916  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmjzs","generateName":"kube-proxy-","namespace":"kube-system","uid":"fbfa39a4-ba62-4e31-8126-9a320311e846","resourceVersion":"754","creationTimestamp":"2024-01-08T21:26:39Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 21:40:30.651903  358628 request.go:629] Waited for 195.415894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:40:30.652016  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:40:30.652025  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:30.652041  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:30.652052  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:30.655213  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:40:30.655236  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:30.655243  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:30.655249  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:30.655254  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:30 GMT
	I0108 21:40:30.655259  358628 round_trippers.go:580]     Audit-Id: 3d5adc0b-490d-4a94-86f8-2513520bf8cb
	I0108 21:40:30.655264  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:30.655269  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:30.655566  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:40:30.655896  358628 pod_ready.go:92] pod "kube-proxy-bmjzs" in "kube-system" namespace has status "Ready":"True"
	I0108 21:40:30.655912  358628 pod_ready.go:81] duration metric: took 399.066949ms waiting for pod "kube-proxy-bmjzs" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:30.655923  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cpq6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:30.850991  358628 request.go:629] Waited for 194.981592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cpq6p
	I0108 21:40:30.851060  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cpq6p
	I0108 21:40:30.851071  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:30.851084  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:30.851097  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:30.854487  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:40:30.854504  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:30.854511  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:30.854516  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:30.854526  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:30.854531  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:30.854537  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:30 GMT
	I0108 21:40:30.854543  358628 round_trippers.go:580]     Audit-Id: 9266bdd2-bc89-47b0-8288-29e45cf0c502
	I0108 21:40:30.855069  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cpq6p","generateName":"kube-proxy-","namespace":"kube-system","uid":"52634211-9ecd-4fd9-a8ce-88f67c668e75","resourceVersion":"1241","creationTimestamp":"2024-01-08T21:28:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9488e52-a902-4155-968f-ffde12352da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:28:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9488e52-a902-4155-968f-ffde12352da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0108 21:40:31.051884  358628 request.go:629] Waited for 196.358555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:40:31.051962  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345-m03
	I0108 21:40:31.051970  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:31.051978  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:31.051988  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:31.054283  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:31.054304  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:31.054313  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:31.054322  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:31 GMT
	I0108 21:40:31.054328  358628 round_trippers.go:580]     Audit-Id: ef8e8db6-a57a-4f2e-800a-7b619b264fd2
	I0108 21:40:31.054335  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:31.054344  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:31.054352  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:31.054478  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345-m03","uid":"821547e7-268a-49a1-8fb0-6496945957ac","resourceVersion":"1221","creationTimestamp":"2024-01-08T21:40:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_40_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:40:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 21:40:31.054785  358628 pod_ready.go:92] pod "kube-proxy-cpq6p" in "kube-system" namespace has status "Ready":"True"
	I0108 21:40:31.054803  358628 pod_ready.go:81] duration metric: took 398.874004ms waiting for pod "kube-proxy-cpq6p" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:31.054813  358628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:31.251972  358628 request.go:629] Waited for 197.055573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:40:31.252061  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-962345
	I0108 21:40:31.252069  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:31.252080  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:31.252090  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:31.255136  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:40:31.255166  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:31.255177  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:31.255185  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:31.255193  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:31 GMT
	I0108 21:40:31.255201  358628 round_trippers.go:580]     Audit-Id: 83da47e5-4f47-4847-a718-737dcb71b0f8
	I0108 21:40:31.255209  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:31.255219  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:31.255391  358628 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-962345","namespace":"kube-system","uid":"3778c0a4-1528-4336-9f02-b77a2a6a1c34","resourceVersion":"873","creationTimestamp":"2024-01-08T21:26:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.mirror":"af2489a6d3116ba4abcb5fd745efd3a4","kubernetes.io/config.seen":"2024-01-08T21:26:26.755431609Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:26:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 21:40:31.451318  358628 request.go:629] Waited for 195.51501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:40:31.451426  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes/multinode-962345
	I0108 21:40:31.451435  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:31.451442  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:31.451449  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:31.454159  358628 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:40:31.454187  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:31.454196  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:31.454204  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:31 GMT
	I0108 21:40:31.454212  358628 round_trippers.go:580]     Audit-Id: cd3c8aec-a201-4f25-942f-bcdef6235382
	I0108 21:40:31.454220  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:31.454233  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:31.454243  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:31.454378  358628 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T21:26:23Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 21:40:31.454705  358628 pod_ready.go:92] pod "kube-scheduler-multinode-962345" in "kube-system" namespace has status "Ready":"True"
	I0108 21:40:31.454721  358628 pod_ready.go:81] duration metric: took 399.900785ms waiting for pod "kube-scheduler-multinode-962345" in "kube-system" namespace to be "Ready" ...
	I0108 21:40:31.454731  358628 pod_ready.go:38] duration metric: took 1.600526262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:40:31.454751  358628 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:40:31.454801  358628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:40:31.468943  358628 system_svc.go:56] duration metric: took 14.182645ms WaitForService to wait for kubelet.
	I0108 21:40:31.468971  358628 kubeadm.go:581] duration metric: took 1.642881881s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:40:31.468992  358628 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:40:31.651687  358628 request.go:629] Waited for 182.606188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.239:8443/api/v1/nodes
	I0108 21:40:31.651775  358628 round_trippers.go:463] GET https://192.168.39.239:8443/api/v1/nodes
	I0108 21:40:31.651791  358628 round_trippers.go:469] Request Headers:
	I0108 21:40:31.651803  358628 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:40:31.651815  358628 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:40:31.655597  358628 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:40:31.655616  358628 round_trippers.go:577] Response Headers:
	I0108 21:40:31.655623  358628 round_trippers.go:580]     Audit-Id: 19f01882-e812-4cbf-a9d1-69e1ecf85a68
	I0108 21:40:31.655634  358628 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:40:31.655640  358628 round_trippers.go:580]     Content-Type: application/json
	I0108 21:40:31.655645  358628 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 00e2acca-8138-48d9-be4e-ce601b6ad858
	I0108 21:40:31.655650  358628 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e3b299f-f6b5-4236-9040-a08d1fd0fdc8
	I0108 21:40:31.655655  358628 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:40:31 GMT
	I0108 21:40:31.656052  358628 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"multinode-962345","uid":"eab3d0fe-5667-4e9a-8ba4-adbbcc7efd40","resourceVersion":"898","creationTimestamp":"2024-01-08T21:26:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-962345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-962345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_26_27_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16239 chars]
	I0108 21:40:31.656650  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:40:31.656673  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:40:31.656689  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:40:31.656700  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:40:31.656709  358628 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:40:31.656714  358628 node_conditions.go:123] node cpu capacity is 2
	I0108 21:40:31.656718  358628 node_conditions.go:105] duration metric: took 187.721604ms to run NodePressure ...
	I0108 21:40:31.656733  358628 start.go:228] waiting for startup goroutines ...
	I0108 21:40:31.656752  358628 start.go:242] writing updated cluster config ...
	I0108 21:40:31.657055  358628 ssh_runner.go:195] Run: rm -f paused
	I0108 21:40:31.709293  358628 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:40:31.711430  358628 out.go:177] * Done! kubectl is now configured to use "multinode-962345" cluster and "default" namespace by default
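	(Editor's note, for context on the pod_ready entries above: those lines record minikube polling each control-plane pod until its Ready condition reports True, with client-side throttling from the Kubernetes client visible as the "Waited for ... due to client-side throttling" messages. Below is a minimal, illustrative sketch of that kind of readiness poll written against client-go. It is an assumption-laden example, not minikube's actual pod_ready.go code: the kubeconfig loading, the 500ms poll interval, and the hard-coded pod/namespace names are placeholders chosen to mirror the "waiting up to 6m0s" messages in the log.)

	// Illustrative sketch only: NOT minikube's implementation. It polls a pod's
	// Ready condition with client-go, which is what the `pod_ready.go:78/:81/:92`
	// lines above are reporting on.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, mirroring the
	// `has status "Ready":"True"` lines in the log.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig at the default location; minikube constructs its
		// client differently, this is just the simplest way to obtain a clientset.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms for up to 6 minutes, matching the "waiting up to 6m0s"
		// messages in the log above. Pod and namespace names are examples.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-962345", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				return isPodReady(pod), nil
			})
		fmt.Println("ready:", err == nil)
	}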
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:36:16 UTC, ends at Mon 2024-01-08 21:40:32 UTC. --
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.784160956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750032784140345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a14b1616-14c7-4c4f-afab-b65c4809024a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.784871804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=74a71ef1-100b-4c76-a3cb-f4c4e990970b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.784966851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=74a71ef1-100b-4c76-a3cb-f4c4e990970b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.785191243Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12a9621debe5183b1f1b893ec252cbdde6a205534834a6efcf9130bf2d03df62,PodSandboxId:7000a70f13b503dda47006362c201a67cb7f283d5e553468e091859ef923da6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749843115534444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a10085550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055507a2da8a24fdde54c912076d685062f4143772f3093aeba149fcd9ef0e5c,PodSandboxId:adac518b6511e16d4a32d5df0fb394f0414a95fa213f0b361bfd30d3c6c048ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704749820320016549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-wmznk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84ab7957-5a65-40e2-a54b-138c6c0894f5,},Annotations:map[string]string{io.kubernetes.container.hash: 47861c95,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36edb83a91627eeacfa35df0ff136a2f59dd920e8fbc860ba82d5f3c9d3a36f,PodSandboxId:78fa0307a25422749ad359649fb38e05030208280252150a31ee54aacab77e26,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749819456603663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v6dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1edff2-3b29-4045-b7b9-935c47115d16,},Annotations:map[string]string{io.kubernetes.container.hash: badac4da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba28cb3667741f90da65bd24a5a7fa2b282ccf87f6742458a910d495ff824bc5,PodSandboxId:16ee800b9334cb717fa292e1b6878d214c8f69a2a985b4b4361e71340f8a5750,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704749814462785912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5w9nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b84fc0ee-c9b1-4e6c-b066-536f2fd56d52,},Annotations:map[string]string{io.kubernetes.container.hash: d5b65b0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aec0b76642fbe2657ff6aaca570c0685bb6fe68090cd8e5ef14993c9bfd53e5,PodSandboxId:f32c98a9b2ddd752f2484d7dea83d0d2be36cd3c623719b93cb53ab8f16337e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749811971535675,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbfa39a4-ba62-4e31-8126-9a3203
11e846,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1bef98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19693fa0328acccd6c4d9c0a58354299bc12776d86eae9e6ade1f6d3b3bb73c4,PodSandboxId:7000a70f13b503dda47006362c201a67cb7f283d5e553468e091859ef923da6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749811887824419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a1008
5550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79130409c1fb3381047374b7a470c98e3d9f03f63b0907aa3047bead8862ca8d,PodSandboxId:1c3a86b20d11a5c02d939ef1a676069ff7d8285a99bd8b52cf55c415b762b0cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749805428929733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6fd29cfc92d55a7ce4e2f96974ea73,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 22b676a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb6c2b1c480c4687bde0b2049d2f0c4f1d0359c3354dc9ee2185e918f699dfb,PodSandboxId:7b27917dd282d449b0e5419897529a0037a8f03b252f2421155781196eddced8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749805189904379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dbed9a3f64fb2ec41dcc39fae30b654,},Annotations:map[string]string{io.kubernetes.container.hash
: 671fa91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f0de888bdcb053d772c225ff3d98a937c1874e3331745775696e2fcf8be346,PodSandboxId:37c47ec15f51ab9c2ec7296806b5166601f54ae285e04e4e363a2dd0fea93412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749805052931390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f90f3600544be0f17e2e088ab14d51,},Annotations:map[string]string{io.k
ubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9967967ee1c76df6a35a73d3b28bca12e93d3fa5c1b92370a74514a3cf37f3e,PodSandboxId:b7dd1249fb50f9aac8f24021d9563bbb5c8c7e84c8472c8673ab1e34e48b9662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749804795039879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2489a6d3116ba4abcb5fd745efd3a4,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=74a71ef1-100b-4c76-a3cb-f4c4e990970b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.832005725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a1cfb06c-0795-41ee-82eb-6bafbff4beb4 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.832061376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a1cfb06c-0795-41ee-82eb-6bafbff4beb4 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.834007939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9bbf0040-689b-46a2-ae2b-55bb49730e50 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.834597606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750032834580711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9bbf0040-689b-46a2-ae2b-55bb49730e50 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.835641792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d01140fb-66dd-4bb0-871b-828b3195c21f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.835690851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d01140fb-66dd-4bb0-871b-828b3195c21f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.835899062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12a9621debe5183b1f1b893ec252cbdde6a205534834a6efcf9130bf2d03df62,PodSandboxId:7000a70f13b503dda47006362c201a67cb7f283d5e553468e091859ef923da6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749843115534444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a10085550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055507a2da8a24fdde54c912076d685062f4143772f3093aeba149fcd9ef0e5c,PodSandboxId:adac518b6511e16d4a32d5df0fb394f0414a95fa213f0b361bfd30d3c6c048ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704749820320016549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-wmznk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84ab7957-5a65-40e2-a54b-138c6c0894f5,},Annotations:map[string]string{io.kubernetes.container.hash: 47861c95,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36edb83a91627eeacfa35df0ff136a2f59dd920e8fbc860ba82d5f3c9d3a36f,PodSandboxId:78fa0307a25422749ad359649fb38e05030208280252150a31ee54aacab77e26,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749819456603663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v6dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1edff2-3b29-4045-b7b9-935c47115d16,},Annotations:map[string]string{io.kubernetes.container.hash: badac4da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba28cb3667741f90da65bd24a5a7fa2b282ccf87f6742458a910d495ff824bc5,PodSandboxId:16ee800b9334cb717fa292e1b6878d214c8f69a2a985b4b4361e71340f8a5750,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704749814462785912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5w9nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b84fc0ee-c9b1-4e6c-b066-536f2fd56d52,},Annotations:map[string]string{io.kubernetes.container.hash: d5b65b0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aec0b76642fbe2657ff6aaca570c0685bb6fe68090cd8e5ef14993c9bfd53e5,PodSandboxId:f32c98a9b2ddd752f2484d7dea83d0d2be36cd3c623719b93cb53ab8f16337e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749811971535675,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbfa39a4-ba62-4e31-8126-9a3203
11e846,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1bef98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19693fa0328acccd6c4d9c0a58354299bc12776d86eae9e6ade1f6d3b3bb73c4,PodSandboxId:7000a70f13b503dda47006362c201a67cb7f283d5e553468e091859ef923da6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749811887824419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a1008
5550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79130409c1fb3381047374b7a470c98e3d9f03f63b0907aa3047bead8862ca8d,PodSandboxId:1c3a86b20d11a5c02d939ef1a676069ff7d8285a99bd8b52cf55c415b762b0cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749805428929733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6fd29cfc92d55a7ce4e2f96974ea73,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 22b676a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb6c2b1c480c4687bde0b2049d2f0c4f1d0359c3354dc9ee2185e918f699dfb,PodSandboxId:7b27917dd282d449b0e5419897529a0037a8f03b252f2421155781196eddced8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749805189904379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dbed9a3f64fb2ec41dcc39fae30b654,},Annotations:map[string]string{io.kubernetes.container.hash
: 671fa91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f0de888bdcb053d772c225ff3d98a937c1874e3331745775696e2fcf8be346,PodSandboxId:37c47ec15f51ab9c2ec7296806b5166601f54ae285e04e4e363a2dd0fea93412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749805052931390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f90f3600544be0f17e2e088ab14d51,},Annotations:map[string]string{io.k
ubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9967967ee1c76df6a35a73d3b28bca12e93d3fa5c1b92370a74514a3cf37f3e,PodSandboxId:b7dd1249fb50f9aac8f24021d9563bbb5c8c7e84c8472c8673ab1e34e48b9662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749804795039879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2489a6d3116ba4abcb5fd745efd3a4,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d01140fb-66dd-4bb0-871b-828b3195c21f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.876058871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d10ebc33-aba2-4705-926c-698147e11fd4 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.876124639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d10ebc33-aba2-4705-926c-698147e11fd4 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.877540020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3a436540-4e63-4e0e-bf6e-7976fb9e7b3f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.878042928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750032878023303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3a436540-4e63-4e0e-bf6e-7976fb9e7b3f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.878659940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7513078a-cea5-468f-b445-65540b1c3914 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.878706487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7513078a-cea5-468f-b445-65540b1c3914 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.878932445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12a9621debe5183b1f1b893ec252cbdde6a205534834a6efcf9130bf2d03df62,PodSandboxId:7000a70f13b503dda47006362c201a67cb7f283d5e553468e091859ef923da6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749843115534444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a10085550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055507a2da8a24fdde54c912076d685062f4143772f3093aeba149fcd9ef0e5c,PodSandboxId:adac518b6511e16d4a32d5df0fb394f0414a95fa213f0b361bfd30d3c6c048ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704749820320016549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-wmznk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84ab7957-5a65-40e2-a54b-138c6c0894f5,},Annotations:map[string]string{io.kubernetes.container.hash: 47861c95,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36edb83a91627eeacfa35df0ff136a2f59dd920e8fbc860ba82d5f3c9d3a36f,PodSandboxId:78fa0307a25422749ad359649fb38e05030208280252150a31ee54aacab77e26,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749819456603663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v6dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1edff2-3b29-4045-b7b9-935c47115d16,},Annotations:map[string]string{io.kubernetes.container.hash: badac4da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba28cb3667741f90da65bd24a5a7fa2b282ccf87f6742458a910d495ff824bc5,PodSandboxId:16ee800b9334cb717fa292e1b6878d214c8f69a2a985b4b4361e71340f8a5750,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704749814462785912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5w9nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b84fc0ee-c9b1-4e6c-b066-536f2fd56d52,},Annotations:map[string]string{io.kubernetes.container.hash: d5b65b0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aec0b76642fbe2657ff6aaca570c0685bb6fe68090cd8e5ef14993c9bfd53e5,PodSandboxId:f32c98a9b2ddd752f2484d7dea83d0d2be36cd3c623719b93cb53ab8f16337e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749811971535675,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbfa39a4-ba62-4e31-8126-9a3203
11e846,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1bef98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19693fa0328acccd6c4d9c0a58354299bc12776d86eae9e6ade1f6d3b3bb73c4,PodSandboxId:7000a70f13b503dda47006362c201a67cb7f283d5e553468e091859ef923da6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749811887824419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a1008
5550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79130409c1fb3381047374b7a470c98e3d9f03f63b0907aa3047bead8862ca8d,PodSandboxId:1c3a86b20d11a5c02d939ef1a676069ff7d8285a99bd8b52cf55c415b762b0cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749805428929733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6fd29cfc92d55a7ce4e2f96974ea73,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 22b676a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb6c2b1c480c4687bde0b2049d2f0c4f1d0359c3354dc9ee2185e918f699dfb,PodSandboxId:7b27917dd282d449b0e5419897529a0037a8f03b252f2421155781196eddced8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749805189904379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dbed9a3f64fb2ec41dcc39fae30b654,},Annotations:map[string]string{io.kubernetes.container.hash
: 671fa91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f0de888bdcb053d772c225ff3d98a937c1874e3331745775696e2fcf8be346,PodSandboxId:37c47ec15f51ab9c2ec7296806b5166601f54ae285e04e4e363a2dd0fea93412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749805052931390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f90f3600544be0f17e2e088ab14d51,},Annotations:map[string]string{io.k
ubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9967967ee1c76df6a35a73d3b28bca12e93d3fa5c1b92370a74514a3cf37f3e,PodSandboxId:b7dd1249fb50f9aac8f24021d9563bbb5c8c7e84c8472c8673ab1e34e48b9662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749804795039879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2489a6d3116ba4abcb5fd745efd3a4,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7513078a-cea5-468f-b445-65540b1c3914 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.921721402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1d9d2372-3caf-4cbb-96af-1488978edff9 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.921777052Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1d9d2372-3caf-4cbb-96af-1488978edff9 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.922965121Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cd3889f6-a674-4b0a-a990-8375fee3d86f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.923456064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750032923440453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cd3889f6-a674-4b0a-a990-8375fee3d86f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.924428245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=53ecf17a-ee12-40d2-a46c-a992e6487aac name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.924502922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=53ecf17a-ee12-40d2-a46c-a992e6487aac name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:40:32 multinode-962345 crio[713]: time="2024-01-08 21:40:32.924708927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12a9621debe5183b1f1b893ec252cbdde6a205534834a6efcf9130bf2d03df62,PodSandboxId:7000a70f13b503dda47006362c201a67cb7f283d5e553468e091859ef923da6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749843115534444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a10085550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055507a2da8a24fdde54c912076d685062f4143772f3093aeba149fcd9ef0e5c,PodSandboxId:adac518b6511e16d4a32d5df0fb394f0414a95fa213f0b361bfd30d3c6c048ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704749820320016549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-wmznk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84ab7957-5a65-40e2-a54b-138c6c0894f5,},Annotations:map[string]string{io.kubernetes.container.hash: 47861c95,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f36edb83a91627eeacfa35df0ff136a2f59dd920e8fbc860ba82d5f3c9d3a36f,PodSandboxId:78fa0307a25422749ad359649fb38e05030208280252150a31ee54aacab77e26,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749819456603663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-v6dmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1edff2-3b29-4045-b7b9-935c47115d16,},Annotations:map[string]string{io.kubernetes.container.hash: badac4da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba28cb3667741f90da65bd24a5a7fa2b282ccf87f6742458a910d495ff824bc5,PodSandboxId:16ee800b9334cb717fa292e1b6878d214c8f69a2a985b4b4361e71340f8a5750,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704749814462785912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5w9nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b84fc0ee-c9b1-4e6c-b066-536f2fd56d52,},Annotations:map[string]string{io.kubernetes.container.hash: d5b65b0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aec0b76642fbe2657ff6aaca570c0685bb6fe68090cd8e5ef14993c9bfd53e5,PodSandboxId:f32c98a9b2ddd752f2484d7dea83d0d2be36cd3c623719b93cb53ab8f16337e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749811971535675,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbfa39a4-ba62-4e31-8126-9a3203
11e846,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1bef98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19693fa0328acccd6c4d9c0a58354299bc12776d86eae9e6ade1f6d3b3bb73c4,PodSandboxId:7000a70f13b503dda47006362c201a67cb7f283d5e553468e091859ef923da6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749811887824419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da89492c-e129-462d-b84e-2f4a1008
5550,},Annotations:map[string]string{io.kubernetes.container.hash: f394d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79130409c1fb3381047374b7a470c98e3d9f03f63b0907aa3047bead8862ca8d,PodSandboxId:1c3a86b20d11a5c02d939ef1a676069ff7d8285a99bd8b52cf55c415b762b0cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749805428929733,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6fd29cfc92d55a7ce4e2f96974ea73,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 22b676a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb6c2b1c480c4687bde0b2049d2f0c4f1d0359c3354dc9ee2185e918f699dfb,PodSandboxId:7b27917dd282d449b0e5419897529a0037a8f03b252f2421155781196eddced8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749805189904379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dbed9a3f64fb2ec41dcc39fae30b654,},Annotations:map[string]string{io.kubernetes.container.hash
: 671fa91a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f0de888bdcb053d772c225ff3d98a937c1874e3331745775696e2fcf8be346,PodSandboxId:37c47ec15f51ab9c2ec7296806b5166601f54ae285e04e4e363a2dd0fea93412,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749805052931390,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f90f3600544be0f17e2e088ab14d51,},Annotations:map[string]string{io.k
ubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9967967ee1c76df6a35a73d3b28bca12e93d3fa5c1b92370a74514a3cf37f3e,PodSandboxId:b7dd1249fb50f9aac8f24021d9563bbb5c8c7e84c8472c8673ab1e34e48b9662,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749804795039879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-962345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2489a6d3116ba4abcb5fd745efd3a4,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=53ecf17a-ee12-40d2-a46c-a992e6487aac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	12a9621debe51       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   7000a70f13b50       storage-provisioner
	055507a2da8a2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   adac518b6511e       busybox-5bc68d56bd-wmznk
	f36edb83a9162       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   78fa0307a2542       coredns-5dd5756b68-v6dmd
	ba28cb3667741       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   16ee800b9334c       kindnet-5w9nh
	0aec0b76642fb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   f32c98a9b2ddd       kube-proxy-bmjzs
	19693fa0328ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   7000a70f13b50       storage-provisioner
	79130409c1fb3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   1c3a86b20d11a       etcd-multinode-962345
	2bb6c2b1c480c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   7b27917dd282d       kube-apiserver-multinode-962345
	e5f0de888bdcb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   37c47ec15f51a       kube-controller-manager-multinode-962345
	e9967967ee1c7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   b7dd1249fb50f       kube-scheduler-multinode-962345
	
	
	==> coredns [f36edb83a91627eeacfa35df0ff136a2f59dd920e8fbc860ba82d5f3c9d3a36f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41876 - 6006 "HINFO IN 4932117952405774349.7034910164624139794. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027820765s
	
	
	==> describe nodes <==
	Name:               multinode-962345
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-962345
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-962345
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_26_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:26:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-962345
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:40:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:37:21 +0000   Mon, 08 Jan 2024 21:26:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:37:21 +0000   Mon, 08 Jan 2024 21:26:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:37:21 +0000   Mon, 08 Jan 2024 21:26:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:37:21 +0000   Mon, 08 Jan 2024 21:36:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    multinode-962345
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2493634cfb3e4223bbb0128883aa3ce6
	  System UUID:                2493634c-fb3e-4223-bbb0-128883aa3ce6
	  Boot ID:                    48487721-385c-43c5-a93b-dc1ed7d7f8df
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wmznk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-v6dmd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-962345                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-5w9nh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-962345             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-962345    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bmjzs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-962345             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-962345 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-962345 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-962345 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-962345 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-962345 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-962345 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-962345 event: Registered Node multinode-962345 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-962345 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m50s)  kubelet          Node multinode-962345 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m50s)  kubelet          Node multinode-962345 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m50s)  kubelet          Node multinode-962345 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m30s                  node-controller  Node multinode-962345 event: Registered Node multinode-962345 in Controller
	
	
	Name:               multinode-962345-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-962345-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-962345
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T21_40_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:38:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-962345-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:40:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:38:40 +0000   Mon, 08 Jan 2024 21:38:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:38:40 +0000   Mon, 08 Jan 2024 21:38:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:38:40 +0000   Mon, 08 Jan 2024 21:38:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:38:40 +0000   Mon, 08 Jan 2024 21:38:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    multinode-962345-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d203a689d1184325a914612dcf629058
	  System UUID:                d203a689-d118-4325-a914-612dcf629058
	  Boot ID:                    7b05427d-a8d8-442e-9294-a598d1ded15b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ck8xm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-mvv2x               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-2c2t6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 111s                   kube-proxy  
	  Normal   NodeReady                13m                    kubelet     Node multinode-962345-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m51s                  kubelet     Node multinode-962345-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m16s (x2 over 3m16s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeHasSufficientPID     114s (x7 over 13m)     kubelet     Node multinode-962345-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  114s (x7 over 13m)     kubelet     Node multinode-962345-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    114s (x7 over 13m)     kubelet     Node multinode-962345-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 113s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  113s (x2 over 113s)    kubelet     Node multinode-962345-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    113s (x2 over 113s)    kubelet     Node multinode-962345-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s (x2 over 113s)    kubelet     Node multinode-962345-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  113s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                113s                   kubelet     Node multinode-962345-m02 status is now: NodeReady
	
	
	Name:               multinode-962345-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-962345-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-962345
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T21_40_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:40:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-962345-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:40:28 +0000   Mon, 08 Jan 2024 21:40:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:40:28 +0000   Mon, 08 Jan 2024 21:40:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:40:28 +0000   Mon, 08 Jan 2024 21:40:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:40:28 +0000   Mon, 08 Jan 2024 21:40:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    multinode-962345-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c5c372781704c45b68cebf42e75c05e
	  System UUID:                3c5c3727-8170-4c45-b68c-ebf42e75c05e
	  Boot ID:                    4379234a-fe78-4a3c-9b2c-02ef7aa9babf
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-spk2c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kindnet-psmlz               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-cpq6p            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 2s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-962345-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-962345-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-962345-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-962345-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-962345-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-962345-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-962345-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-962345-m03 status is now: NodeReady
	  Normal   NodeNotReady             70s                 kubelet     Node multinode-962345-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        40s (x2 over 100s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       8s                  kubelet     Node multinode-962345-m03 status is now: NodeNotSchedulable
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-962345-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-962345-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-962345-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-962345-m03 status is now: NodeReady
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	
	
	==> dmesg <==
	[Jan 8 21:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066848] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.357666] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.473692] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152274] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.765528] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.439225] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.107493] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.155993] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.099397] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.228212] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +16.737160] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[Jan 8 21:37] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [79130409c1fb3381047374b7a470c98e3d9f03f63b0907aa3047bead8862ca8d] <==
	{"level":"info","ts":"2024-01-08T21:36:47.434538Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:36:47.434583Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:36:47.434782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 switched to configuration voters=(1001402458959805906)"}
	{"level":"info","ts":"2024-01-08T21:36:47.434847Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9f81b65ca2cd0829","local-member-id":"de5b23b13807dd2","added-peer-id":"de5b23b13807dd2","added-peer-peer-urls":["https://192.168.39.239:2380"]}
	{"level":"info","ts":"2024-01-08T21:36:47.434957Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9f81b65ca2cd0829","local-member-id":"de5b23b13807dd2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:36:47.434995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:36:47.438322Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T21:36:47.444448Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"de5b23b13807dd2","initial-advertise-peer-urls":["https://192.168.39.239:2380"],"listen-peer-urls":["https://192.168.39.239:2380"],"advertise-client-urls":["https://192.168.39.239:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.239:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T21:36:47.444508Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T21:36:47.444554Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.239:2380"}
	{"level":"info","ts":"2024-01-08T21:36:47.44456Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.239:2380"}
	{"level":"info","ts":"2024-01-08T21:36:49.186293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-08T21:36:49.186356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:36:49.186384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 received MsgPreVoteResp from de5b23b13807dd2 at term 2"}
	{"level":"info","ts":"2024-01-08T21:36:49.186402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 became candidate at term 3"}
	{"level":"info","ts":"2024-01-08T21:36:49.186408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 received MsgVoteResp from de5b23b13807dd2 at term 3"}
	{"level":"info","ts":"2024-01-08T21:36:49.186417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de5b23b13807dd2 became leader at term 3"}
	{"level":"info","ts":"2024-01-08T21:36:49.186425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: de5b23b13807dd2 elected leader de5b23b13807dd2 at term 3"}
	{"level":"info","ts":"2024-01-08T21:36:49.190834Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"de5b23b13807dd2","local-member-attributes":"{Name:multinode-962345 ClientURLs:[https://192.168.39.239:2379]}","request-path":"/0/members/de5b23b13807dd2/attributes","cluster-id":"9f81b65ca2cd0829","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:36:49.190859Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:36:49.191091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:36:49.192829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.239:2379"}
	{"level":"info","ts":"2024-01-08T21:36:49.193566Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:36:49.193619Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:36:49.193569Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:40:33 up 4 min,  0 users,  load average: 0.33, 0.26, 0.12
	Linux multinode-962345 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [ba28cb3667741f90da65bd24a5a7fa2b282ccf87f6742458a910d495ff824bc5] <==
	I0108 21:39:46.206836       1 main.go:250] Node multinode-962345-m03 has CIDR [10.244.3.0/24] 
	I0108 21:39:56.211640       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:39:56.211689       1 main.go:227] handling current node
	I0108 21:39:56.211700       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0108 21:39:56.211706       1 main.go:250] Node multinode-962345-m02 has CIDR [10.244.1.0/24] 
	I0108 21:39:56.211820       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0108 21:39:56.211855       1 main.go:250] Node multinode-962345-m03 has CIDR [10.244.3.0/24] 
	I0108 21:40:06.226544       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:40:06.226674       1 main.go:227] handling current node
	I0108 21:40:06.226714       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0108 21:40:06.226732       1 main.go:250] Node multinode-962345-m02 has CIDR [10.244.1.0/24] 
	I0108 21:40:06.226859       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0108 21:40:06.226900       1 main.go:250] Node multinode-962345-m03 has CIDR [10.244.3.0/24] 
	I0108 21:40:16.241964       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:40:16.242154       1 main.go:227] handling current node
	I0108 21:40:16.242189       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0108 21:40:16.242207       1 main.go:250] Node multinode-962345-m02 has CIDR [10.244.1.0/24] 
	I0108 21:40:16.242614       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0108 21:40:16.242669       1 main.go:250] Node multinode-962345-m03 has CIDR [10.244.3.0/24] 
	I0108 21:40:26.253004       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 21:40:26.253406       1 main.go:227] handling current node
	I0108 21:40:26.253471       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0108 21:40:26.253502       1 main.go:250] Node multinode-962345-m02 has CIDR [10.244.1.0/24] 
	I0108 21:40:26.253755       1 main.go:223] Handling node with IPs: map[192.168.39.120:{}]
	I0108 21:40:26.253805       1 main.go:250] Node multinode-962345-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2bb6c2b1c480c4687bde0b2049d2f0c4f1d0359c3354dc9ee2185e918f699dfb] <==
	I0108 21:36:50.513631       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0108 21:36:50.586287       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0108 21:36:50.586326       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0108 21:36:50.666525       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:36:50.686561       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 21:36:50.686828       1 aggregator.go:166] initial CRD sync complete...
	I0108 21:36:50.686865       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 21:36:50.686888       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 21:36:50.686912       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:36:50.701752       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 21:36:50.701840       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 21:36:50.701901       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 21:36:50.701924       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 21:36:50.702009       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:36:50.704699       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:36:50.711442       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0108 21:36:50.714089       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0108 21:36:51.509521       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:36:53.119790       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 21:36:53.270672       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 21:36:53.279837       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 21:36:53.350671       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:36:53.357457       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:37:03.189591       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 21:37:03.329914       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e5f0de888bdcb053d772c225ff3d98a937c1874e3331745775696e2fcf8be346] <==
	I0108 21:38:40.681052       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-962345-m03"
	I0108 21:38:40.681511       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-qwxd6" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-qwxd6"
	I0108 21:38:40.681867       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-962345-m02\" does not exist"
	I0108 21:38:40.694480       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-962345-m02" podCIDRs=["10.244.1.0/24"]
	I0108 21:38:40.817830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-962345-m02"
	I0108 21:38:41.581645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="94.086µs"
	I0108 21:38:52.842872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="142.591µs"
	I0108 21:38:53.434396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="169.399µs"
	I0108 21:38:53.436107       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.153µs"
	I0108 21:39:23.862523       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-962345-m02"
	I0108 21:40:25.025742       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-ck8xm"
	I0108 21:40:25.041890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.320409ms"
	I0108 21:40:25.071014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.055797ms"
	I0108 21:40:25.071119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.585µs"
	I0108 21:40:26.729992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.509966ms"
	I0108 21:40:26.730122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.911µs"
	I0108 21:40:28.038383       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-962345-m02"
	I0108 21:40:28.312425       1 event.go:307] "Event occurred" object="multinode-962345-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-962345-m03 event: Removing Node multinode-962345-m03 from Controller"
	I0108 21:40:28.718169       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-962345-m03\" does not exist"
	I0108 21:40:28.721767       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-962345-m02"
	I0108 21:40:28.722050       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-spk2c" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-spk2c"
	I0108 21:40:28.729806       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-962345-m03" podCIDRs=["10.244.2.0/24"]
	I0108 21:40:28.865496       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-962345-m02"
	I0108 21:40:29.614420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="94.515µs"
	I0108 21:40:33.313708       1 event.go:307] "Event occurred" object="multinode-962345-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-962345-m03 event: Registered Node multinode-962345-m03 in Controller"
	
	
	==> kube-proxy [0aec0b76642fbe2657ff6aaca570c0685bb6fe68090cd8e5ef14993c9bfd53e5] <==
	I0108 21:36:52.402110       1 server_others.go:69] "Using iptables proxy"
	I0108 21:36:52.431077       1 node.go:141] Successfully retrieved node IP: 192.168.39.239
	I0108 21:36:52.526001       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:36:52.526049       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:36:52.532958       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:36:52.532990       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:36:52.533140       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:36:52.533147       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:52.534638       1 config.go:188] "Starting service config controller"
	I0108 21:36:52.534659       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:36:52.534687       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:36:52.534690       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:36:52.535143       1 config.go:315] "Starting node config controller"
	I0108 21:36:52.535149       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:36:52.635290       1 shared_informer.go:318] Caches are synced for node config
	I0108 21:36:52.658383       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:36:52.658430       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e9967967ee1c76df6a35a73d3b28bca12e93d3fa5c1b92370a74514a3cf37f3e] <==
	I0108 21:36:47.635399       1 serving.go:348] Generated self-signed cert in-memory
	W0108 21:36:50.636073       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 21:36:50.636176       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:36:50.636192       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 21:36:50.636201       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 21:36:50.677411       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 21:36:50.677506       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:50.678789       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:36:50.678922       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:36:50.679717       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 21:36:50.679830       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:36:50.780146       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:36:16 UTC, ends at Mon 2024-01-08 21:40:33 UTC. --
	Jan 08 21:36:52 multinode-962345 kubelet[919]: E0108 21:36:52.551981     919 projected.go:198] Error preparing data for projected volume kube-api-access-zddfv for pod default/busybox-5bc68d56bd-wmznk: object "default"/"kube-root-ca.crt" not registered
	Jan 08 21:36:52 multinode-962345 kubelet[919]: E0108 21:36:52.552035     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84ab7957-5a65-40e2-a54b-138c6c0894f5-kube-api-access-zddfv podName:84ab7957-5a65-40e2-a54b-138c6c0894f5 nodeName:}" failed. No retries permitted until 2024-01-08 21:36:54.55202048 +0000 UTC m=+10.912559206 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zddfv" (UniqueName: "kubernetes.io/projected/84ab7957-5a65-40e2-a54b-138c6c0894f5-kube-api-access-zddfv") pod "busybox-5bc68d56bd-wmznk" (UID: "84ab7957-5a65-40e2-a54b-138c6c0894f5") : object "default"/"kube-root-ca.crt" not registered
	Jan 08 21:36:52 multinode-962345 kubelet[919]: E0108 21:36:52.896184     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-v6dmd" podUID="9c1edff2-3b29-4045-b7b9-935c47115d16"
	Jan 08 21:36:52 multinode-962345 kubelet[919]: E0108 21:36:52.896416     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-wmznk" podUID="84ab7957-5a65-40e2-a54b-138c6c0894f5"
	Jan 08 21:36:54 multinode-962345 kubelet[919]: E0108 21:36:54.466200     919 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 08 21:36:54 multinode-962345 kubelet[919]: E0108 21:36:54.466369     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c1edff2-3b29-4045-b7b9-935c47115d16-config-volume podName:9c1edff2-3b29-4045-b7b9-935c47115d16 nodeName:}" failed. No retries permitted until 2024-01-08 21:36:58.466341071 +0000 UTC m=+14.826879794 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c1edff2-3b29-4045-b7b9-935c47115d16-config-volume") pod "coredns-5dd5756b68-v6dmd" (UID: "9c1edff2-3b29-4045-b7b9-935c47115d16") : object "kube-system"/"coredns" not registered
	Jan 08 21:36:54 multinode-962345 kubelet[919]: E0108 21:36:54.566902     919 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 08 21:36:54 multinode-962345 kubelet[919]: E0108 21:36:54.566955     919 projected.go:198] Error preparing data for projected volume kube-api-access-zddfv for pod default/busybox-5bc68d56bd-wmznk: object "default"/"kube-root-ca.crt" not registered
	Jan 08 21:36:54 multinode-962345 kubelet[919]: E0108 21:36:54.567031     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84ab7957-5a65-40e2-a54b-138c6c0894f5-kube-api-access-zddfv podName:84ab7957-5a65-40e2-a54b-138c6c0894f5 nodeName:}" failed. No retries permitted until 2024-01-08 21:36:58.567017019 +0000 UTC m=+14.927555729 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zddfv" (UniqueName: "kubernetes.io/projected/84ab7957-5a65-40e2-a54b-138c6c0894f5-kube-api-access-zddfv") pod "busybox-5bc68d56bd-wmznk" (UID: "84ab7957-5a65-40e2-a54b-138c6c0894f5") : object "default"/"kube-root-ca.crt" not registered
	Jan 08 21:36:54 multinode-962345 kubelet[919]: E0108 21:36:54.896075     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-wmznk" podUID="84ab7957-5a65-40e2-a54b-138c6c0894f5"
	Jan 08 21:36:54 multinode-962345 kubelet[919]: E0108 21:36:54.896165     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-v6dmd" podUID="9c1edff2-3b29-4045-b7b9-935c47115d16"
	Jan 08 21:36:56 multinode-962345 kubelet[919]: I0108 21:36:56.108393     919 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 21:37:23 multinode-962345 kubelet[919]: I0108 21:37:23.083435     919 scope.go:117] "RemoveContainer" containerID="19693fa0328acccd6c4d9c0a58354299bc12776d86eae9e6ade1f6d3b3bb73c4"
	Jan 08 21:37:43 multinode-962345 kubelet[919]: E0108 21:37:43.916114     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:37:43 multinode-962345 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:37:43 multinode-962345 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:37:43 multinode-962345 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:38:43 multinode-962345 kubelet[919]: E0108 21:38:43.918613     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:38:43 multinode-962345 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:38:43 multinode-962345 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:38:43 multinode-962345 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:39:43 multinode-962345 kubelet[919]: E0108 21:39:43.912991     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:39:43 multinode-962345 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:39:43 multinode-962345 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:39:43 multinode-962345 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-962345 -n multinode-962345
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-962345 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (689.99s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-962345 stop: exit status 82 (2m1.187650719s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-962345"  ...
	* Stopping node "multinode-962345"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-962345 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 status
E0108 21:42:44.575132  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-962345 status: exit status 3 (18.644319145s)

                                                
                                                
-- stdout --
	multinode-962345
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-962345-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:42:55.843717  360948 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0108 21:42:55.843788  360948 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-962345 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-962345 -n multinode-962345
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-962345 -n multinode-962345: exit status 3 (3.169737802s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:42:59.171798  361041 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0108 21:42:59.171818  361041 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-962345" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.00s)

                                                
                                    
x
+
TestPreload (283.18s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-713920 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0108 21:54:44.963986  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:54:56.854591  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-713920 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m21.181505628s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-713920 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-713920 image pull gcr.io/k8s-minikube/busybox: (1.047788494s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-713920
E0108 21:55:47.620818  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-713920: exit status 82 (2m1.142328322s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-713920"  ...
	* Stopping node "test-preload-713920"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-713920 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2024-01-08 21:57:15.059855478 +0000 UTC m=+3310.760977833
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-713920 -n test-preload-713920
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-713920 -n test-preload-713920: exit status 3 (18.613565538s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:57:33.667833  364292 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.29:22: connect: no route to host
	E0108 21:57:33.667854  364292 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.29:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-713920" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-713920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-713920
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-713920: (1.194330504s)
--- FAIL: TestPreload (283.18s)

                                                
                                    
x
+
TestScheduledStopUnix (52.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-306666 --memory=2048 --driver=kvm2  --container-runtime=crio
E0108 21:57:44.575300  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-306666 --memory=2048 --driver=kvm2  --container-runtime=crio: (49.278434638s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306666 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-306666 -n scheduled-stop-306666
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306666 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 364798 running but should have been killed on reschedule of stop
panic.go:523: *** TestScheduledStopUnix FAILED at 2024-01-08 21:58:24.587631358 +0000 UTC m=+3380.288753707
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-306666 -n scheduled-stop-306666
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p scheduled-stop-306666 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p scheduled-stop-306666 logs -n 25: (1.120706683s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| node    | multinode-962345 node start    | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:29 UTC |
	|         | m03 --alsologtostderr          |                       |         |         |                     |                     |
	| node    | list -p multinode-962345       | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	| stop    | -p multinode-962345            | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	| start   | -p multinode-962345            | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC | 08 Jan 24 21:40 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-962345       | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:40 UTC |                     |
	| node    | multinode-962345 node delete   | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:40 UTC | 08 Jan 24 21:40 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-962345 stop          | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:40 UTC |                     |
	| start   | -p multinode-962345            | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC | 08 Jan 24 21:51 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=kvm2                  |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | list -p multinode-962345       | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC |                     |
	| start   | -p multinode-962345-m02        | multinode-962345-m02  | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC |                     |
	|         | --driver=kvm2                  |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| start   | -p multinode-962345-m03        | multinode-962345-m03  | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:52 UTC |
	|         | --driver=kvm2                  |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | add -p multinode-962345        | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC |                     |
	| delete  | -p multinode-962345-m03        | multinode-962345-m03  | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC | 08 Jan 24 21:52 UTC |
	| delete  | -p multinode-962345            | multinode-962345      | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC | 08 Jan 24 21:52 UTC |
	| start   | -p test-preload-713920         | test-preload-713920   | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC | 08 Jan 24 21:55 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                       |         |         |                     |                     |
	|         | --preload=false --driver=kvm2  |                       |         |         |                     |                     |
	|         |  --container-runtime=crio      |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-713920 image pull | test-preload-713920   | jenkins | v1.32.0 | 08 Jan 24 21:55 UTC | 08 Jan 24 21:55 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-713920         | test-preload-713920   | jenkins | v1.32.0 | 08 Jan 24 21:55 UTC |                     |
	| delete  | -p test-preload-713920         | test-preload-713920   | jenkins | v1.32.0 | 08 Jan 24 21:57 UTC | 08 Jan 24 21:57 UTC |
	| start   | -p scheduled-stop-306666       | scheduled-stop-306666 | jenkins | v1.32.0 | 08 Jan 24 21:57 UTC | 08 Jan 24 21:58 UTC |
	|         | --memory=2048 --driver=kvm2    |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-306666       | scheduled-stop-306666 | jenkins | v1.32.0 | 08 Jan 24 21:58 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-306666       | scheduled-stop-306666 | jenkins | v1.32.0 | 08 Jan 24 21:58 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-306666       | scheduled-stop-306666 | jenkins | v1.32.0 | 08 Jan 24 21:58 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-306666       | scheduled-stop-306666 | jenkins | v1.32.0 | 08 Jan 24 21:58 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-306666       | scheduled-stop-306666 | jenkins | v1.32.0 | 08 Jan 24 21:58 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-306666       | scheduled-stop-306666 | jenkins | v1.32.0 | 08 Jan 24 21:58 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:57:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:57:34.929381  364439 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:57:34.929505  364439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:57:34.929508  364439 out.go:309] Setting ErrFile to fd 2...
	I0108 21:57:34.929512  364439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:57:34.929733  364439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 21:57:34.930398  364439 out.go:303] Setting JSON to false
	I0108 21:57:34.931422  364439 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9581,"bootTime":1704741474,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:57:34.931483  364439 start.go:138] virtualization: kvm guest
	I0108 21:57:34.934172  364439 out.go:177] * [scheduled-stop-306666] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:57:34.936001  364439 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:57:34.936028  364439 notify.go:220] Checking for updates...
	I0108 21:57:34.937965  364439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:57:34.939520  364439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:57:34.941137  364439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:57:34.942650  364439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:57:34.944082  364439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:57:34.945823  364439 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:57:34.985106  364439 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:57:34.986604  364439 start.go:298] selected driver: kvm2
	I0108 21:57:34.986614  364439 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:57:34.986629  364439 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:57:34.987565  364439 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:57:34.987642  364439 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:57:35.004364  364439 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:57:35.004473  364439 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:57:35.004730  364439 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 21:57:35.004809  364439 cni.go:84] Creating CNI manager for ""
	I0108 21:57:35.004818  364439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:57:35.004831  364439 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:57:35.004836  364439 start_flags.go:321] config:
	{Name:scheduled-stop-306666 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-306666 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:57:35.004989  364439 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:57:35.007461  364439 out.go:177] * Starting control plane node scheduled-stop-306666 in cluster scheduled-stop-306666
	I0108 21:57:35.009135  364439 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:57:35.009206  364439 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:57:35.009233  364439 cache.go:56] Caching tarball of preloaded images
	I0108 21:57:35.009399  364439 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:57:35.009414  364439 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:57:35.009940  364439 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/config.json ...
	I0108 21:57:35.009972  364439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/config.json: {Name:mk70c2f56ff92ad136fd8d4bf048623b1ee6293b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:57:35.010260  364439 start.go:365] acquiring machines lock for scheduled-stop-306666: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:57:35.010310  364439 start.go:369] acquired machines lock for "scheduled-stop-306666" in 32.21µs
	I0108 21:57:35.010338  364439 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-306666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-306666 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:57:35.010447  364439 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 21:57:35.013646  364439 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0108 21:57:35.013871  364439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:57:35.013914  364439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:57:35.029454  364439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
	I0108 21:57:35.030057  364439 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:57:35.030708  364439 main.go:141] libmachine: Using API Version  1
	I0108 21:57:35.030727  364439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:57:35.031175  364439 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:57:35.031434  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetMachineName
	I0108 21:57:35.031614  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:57:35.031787  364439 start.go:159] libmachine.API.Create for "scheduled-stop-306666" (driver="kvm2")
	I0108 21:57:35.031824  364439 client.go:168] LocalClient.Create starting
	I0108 21:57:35.031869  364439 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem
	I0108 21:57:35.031914  364439 main.go:141] libmachine: Decoding PEM data...
	I0108 21:57:35.031935  364439 main.go:141] libmachine: Parsing certificate...
	I0108 21:57:35.032001  364439 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem
	I0108 21:57:35.032017  364439 main.go:141] libmachine: Decoding PEM data...
	I0108 21:57:35.032025  364439 main.go:141] libmachine: Parsing certificate...
	I0108 21:57:35.032039  364439 main.go:141] libmachine: Running pre-create checks...
	I0108 21:57:35.032047  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .PreCreateCheck
	I0108 21:57:35.032419  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetConfigRaw
	I0108 21:57:35.032855  364439 main.go:141] libmachine: Creating machine...
	I0108 21:57:35.032864  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .Create
	I0108 21:57:35.032992  364439 main.go:141] libmachine: (scheduled-stop-306666) Creating KVM machine...
	I0108 21:57:35.034254  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found existing default KVM network
	I0108 21:57:35.035063  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:35.034887  364462 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a20}
	I0108 21:57:35.040851  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | trying to create private KVM network mk-scheduled-stop-306666 192.168.39.0/24...
	I0108 21:57:35.127274  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | private KVM network mk-scheduled-stop-306666 192.168.39.0/24 created
	I0108 21:57:35.127305  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:35.127204  364462 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:57:35.127321  364439 main.go:141] libmachine: (scheduled-stop-306666) Setting up store path in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666 ...
	I0108 21:57:35.127342  364439 main.go:141] libmachine: (scheduled-stop-306666) Building disk image from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 21:57:35.127419  364439 main.go:141] libmachine: (scheduled-stop-306666) Downloading /home/jenkins/minikube-integration/17866-334768/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 21:57:35.384848  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:35.384723  364462 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/id_rsa...
	I0108 21:57:35.543823  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:35.543647  364462 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/scheduled-stop-306666.rawdisk...
	I0108 21:57:35.543849  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Writing magic tar header
	I0108 21:57:35.543883  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Writing SSH key tar header
	I0108 21:57:35.543897  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:35.543820  364462 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666 ...
	I0108 21:57:35.544073  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666
	I0108 21:57:35.544111  364439 main.go:141] libmachine: (scheduled-stop-306666) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666 (perms=drwx------)
	I0108 21:57:35.544122  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines
	I0108 21:57:35.544148  364439 main.go:141] libmachine: (scheduled-stop-306666) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines (perms=drwxr-xr-x)
	I0108 21:57:35.544173  364439 main.go:141] libmachine: (scheduled-stop-306666) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube (perms=drwxr-xr-x)
	I0108 21:57:35.544185  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:57:35.544199  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768
	I0108 21:57:35.544209  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 21:57:35.544224  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Checking permissions on dir: /home/jenkins
	I0108 21:57:35.544233  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Checking permissions on dir: /home
	I0108 21:57:35.544241  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Skipping /home - not owner
	I0108 21:57:35.544300  364439 main.go:141] libmachine: (scheduled-stop-306666) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768 (perms=drwxrwxr-x)
	I0108 21:57:35.544324  364439 main.go:141] libmachine: (scheduled-stop-306666) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 21:57:35.544338  364439 main.go:141] libmachine: (scheduled-stop-306666) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 21:57:35.544344  364439 main.go:141] libmachine: (scheduled-stop-306666) Creating domain...
	I0108 21:57:35.545309  364439 main.go:141] libmachine: (scheduled-stop-306666) define libvirt domain using xml: 
	I0108 21:57:35.545330  364439 main.go:141] libmachine: (scheduled-stop-306666) <domain type='kvm'>
	I0108 21:57:35.545340  364439 main.go:141] libmachine: (scheduled-stop-306666)   <name>scheduled-stop-306666</name>
	I0108 21:57:35.545348  364439 main.go:141] libmachine: (scheduled-stop-306666)   <memory unit='MiB'>2048</memory>
	I0108 21:57:35.545355  364439 main.go:141] libmachine: (scheduled-stop-306666)   <vcpu>2</vcpu>
	I0108 21:57:35.545367  364439 main.go:141] libmachine: (scheduled-stop-306666)   <features>
	I0108 21:57:35.545378  364439 main.go:141] libmachine: (scheduled-stop-306666)     <acpi/>
	I0108 21:57:35.545385  364439 main.go:141] libmachine: (scheduled-stop-306666)     <apic/>
	I0108 21:57:35.545392  364439 main.go:141] libmachine: (scheduled-stop-306666)     <pae/>
	I0108 21:57:35.545399  364439 main.go:141] libmachine: (scheduled-stop-306666)     
	I0108 21:57:35.545407  364439 main.go:141] libmachine: (scheduled-stop-306666)   </features>
	I0108 21:57:35.545415  364439 main.go:141] libmachine: (scheduled-stop-306666)   <cpu mode='host-passthrough'>
	I0108 21:57:35.545424  364439 main.go:141] libmachine: (scheduled-stop-306666)   
	I0108 21:57:35.545432  364439 main.go:141] libmachine: (scheduled-stop-306666)   </cpu>
	I0108 21:57:35.545439  364439 main.go:141] libmachine: (scheduled-stop-306666)   <os>
	I0108 21:57:35.545461  364439 main.go:141] libmachine: (scheduled-stop-306666)     <type>hvm</type>
	I0108 21:57:35.545471  364439 main.go:141] libmachine: (scheduled-stop-306666)     <boot dev='cdrom'/>
	I0108 21:57:35.545477  364439 main.go:141] libmachine: (scheduled-stop-306666)     <boot dev='hd'/>
	I0108 21:57:35.545485  364439 main.go:141] libmachine: (scheduled-stop-306666)     <bootmenu enable='no'/>
	I0108 21:57:35.545492  364439 main.go:141] libmachine: (scheduled-stop-306666)   </os>
	I0108 21:57:35.545501  364439 main.go:141] libmachine: (scheduled-stop-306666)   <devices>
	I0108 21:57:35.545508  364439 main.go:141] libmachine: (scheduled-stop-306666)     <disk type='file' device='cdrom'>
	I0108 21:57:35.545520  364439 main.go:141] libmachine: (scheduled-stop-306666)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/boot2docker.iso'/>
	I0108 21:57:35.545533  364439 main.go:141] libmachine: (scheduled-stop-306666)       <target dev='hdc' bus='scsi'/>
	I0108 21:57:35.545540  364439 main.go:141] libmachine: (scheduled-stop-306666)       <readonly/>
	I0108 21:57:35.545547  364439 main.go:141] libmachine: (scheduled-stop-306666)     </disk>
	I0108 21:57:35.545562  364439 main.go:141] libmachine: (scheduled-stop-306666)     <disk type='file' device='disk'>
	I0108 21:57:35.545571  364439 main.go:141] libmachine: (scheduled-stop-306666)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 21:57:35.545590  364439 main.go:141] libmachine: (scheduled-stop-306666)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/scheduled-stop-306666.rawdisk'/>
	I0108 21:57:35.545596  364439 main.go:141] libmachine: (scheduled-stop-306666)       <target dev='hda' bus='virtio'/>
	I0108 21:57:35.545604  364439 main.go:141] libmachine: (scheduled-stop-306666)     </disk>
	I0108 21:57:35.545617  364439 main.go:141] libmachine: (scheduled-stop-306666)     <interface type='network'>
	I0108 21:57:35.545628  364439 main.go:141] libmachine: (scheduled-stop-306666)       <source network='mk-scheduled-stop-306666'/>
	I0108 21:57:35.545653  364439 main.go:141] libmachine: (scheduled-stop-306666)       <model type='virtio'/>
	I0108 21:57:35.545666  364439 main.go:141] libmachine: (scheduled-stop-306666)     </interface>
	I0108 21:57:35.545674  364439 main.go:141] libmachine: (scheduled-stop-306666)     <interface type='network'>
	I0108 21:57:35.545692  364439 main.go:141] libmachine: (scheduled-stop-306666)       <source network='default'/>
	I0108 21:57:35.545700  364439 main.go:141] libmachine: (scheduled-stop-306666)       <model type='virtio'/>
	I0108 21:57:35.545706  364439 main.go:141] libmachine: (scheduled-stop-306666)     </interface>
	I0108 21:57:35.545711  364439 main.go:141] libmachine: (scheduled-stop-306666)     <serial type='pty'>
	I0108 21:57:35.545717  364439 main.go:141] libmachine: (scheduled-stop-306666)       <target port='0'/>
	I0108 21:57:35.545722  364439 main.go:141] libmachine: (scheduled-stop-306666)     </serial>
	I0108 21:57:35.545728  364439 main.go:141] libmachine: (scheduled-stop-306666)     <console type='pty'>
	I0108 21:57:35.545733  364439 main.go:141] libmachine: (scheduled-stop-306666)       <target type='serial' port='0'/>
	I0108 21:57:35.545744  364439 main.go:141] libmachine: (scheduled-stop-306666)     </console>
	I0108 21:57:35.545749  364439 main.go:141] libmachine: (scheduled-stop-306666)     <rng model='virtio'>
	I0108 21:57:35.545755  364439 main.go:141] libmachine: (scheduled-stop-306666)       <backend model='random'>/dev/random</backend>
	I0108 21:57:35.545759  364439 main.go:141] libmachine: (scheduled-stop-306666)     </rng>
	I0108 21:57:35.545765  364439 main.go:141] libmachine: (scheduled-stop-306666)     
	I0108 21:57:35.545769  364439 main.go:141] libmachine: (scheduled-stop-306666)     
	I0108 21:57:35.545774  364439 main.go:141] libmachine: (scheduled-stop-306666)   </devices>
	I0108 21:57:35.545778  364439 main.go:141] libmachine: (scheduled-stop-306666) </domain>
	I0108 21:57:35.545785  364439 main.go:141] libmachine: (scheduled-stop-306666) 
	I0108 21:57:35.551753  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:65:51:e3 in network default
	I0108 21:57:35.552297  364439 main.go:141] libmachine: (scheduled-stop-306666) Ensuring networks are active...
	I0108 21:57:35.552314  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:35.553078  364439 main.go:141] libmachine: (scheduled-stop-306666) Ensuring network default is active
	I0108 21:57:35.553526  364439 main.go:141] libmachine: (scheduled-stop-306666) Ensuring network mk-scheduled-stop-306666 is active
	I0108 21:57:35.553976  364439 main.go:141] libmachine: (scheduled-stop-306666) Getting domain xml...
	I0108 21:57:35.554724  364439 main.go:141] libmachine: (scheduled-stop-306666) Creating domain...
	I0108 21:57:36.873807  364439 main.go:141] libmachine: (scheduled-stop-306666) Waiting to get IP...
	I0108 21:57:36.876347  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:36.876871  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:36.876891  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:36.876832  364462 retry.go:31] will retry after 255.25626ms: waiting for machine to come up
	I0108 21:57:37.133496  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:37.133994  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:37.134011  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:37.133943  364462 retry.go:31] will retry after 265.310466ms: waiting for machine to come up
	I0108 21:57:37.400639  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:37.401103  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:37.401120  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:37.401061  364462 retry.go:31] will retry after 377.201104ms: waiting for machine to come up
	I0108 21:57:37.779567  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:37.779989  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:37.780007  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:37.779948  364462 retry.go:31] will retry after 526.303321ms: waiting for machine to come up
	I0108 21:57:38.307815  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:38.308361  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:38.308383  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:38.308258  364462 retry.go:31] will retry after 648.293909ms: waiting for machine to come up
	I0108 21:57:38.957984  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:38.958455  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:38.958481  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:38.958421  364462 retry.go:31] will retry after 842.700783ms: waiting for machine to come up
	I0108 21:57:39.802474  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:39.802937  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:39.802962  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:39.802885  364462 retry.go:31] will retry after 910.713859ms: waiting for machine to come up
	I0108 21:57:40.715820  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:40.716183  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:40.716207  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:40.716128  364462 retry.go:31] will retry after 1.36480574s: waiting for machine to come up
	I0108 21:57:42.082849  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:42.083393  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:42.083434  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:42.083308  364462 retry.go:31] will retry after 1.142074155s: waiting for machine to come up
	I0108 21:57:43.227827  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:43.228344  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:43.228365  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:43.228282  364462 retry.go:31] will retry after 1.726434508s: waiting for machine to come up
	I0108 21:57:44.957765  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:44.958259  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:44.958282  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:44.958192  364462 retry.go:31] will retry after 2.608851133s: waiting for machine to come up
	I0108 21:57:47.570791  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:47.571300  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:47.571327  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:47.571231  364462 retry.go:31] will retry after 2.652120604s: waiting for machine to come up
	I0108 21:57:50.224646  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:50.225049  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:50.225061  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:50.225035  364462 retry.go:31] will retry after 3.87539099s: waiting for machine to come up
	I0108 21:57:54.105065  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:54.105459  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find current IP address of domain scheduled-stop-306666 in network mk-scheduled-stop-306666
	I0108 21:57:54.105484  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | I0108 21:57:54.105406  364462 retry.go:31] will retry after 3.901180607s: waiting for machine to come up
	I0108 21:57:58.007944  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.008441  364439 main.go:141] libmachine: (scheduled-stop-306666) Found IP for machine: 192.168.39.45
	I0108 21:57:58.008464  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has current primary IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.008470  364439 main.go:141] libmachine: (scheduled-stop-306666) Reserving static IP address...
	I0108 21:57:58.008869  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | unable to find host DHCP lease matching {name: "scheduled-stop-306666", mac: "52:54:00:00:c8:28", ip: "192.168.39.45"} in network mk-scheduled-stop-306666
	I0108 21:57:58.101933  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Getting to WaitForSSH function...
	I0108 21:57:58.101964  364439 main.go:141] libmachine: (scheduled-stop-306666) Reserved static IP address: 192.168.39.45
	I0108 21:57:58.101979  364439 main.go:141] libmachine: (scheduled-stop-306666) Waiting for SSH to be available...
	I0108 21:57:58.104481  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.104832  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:58.104872  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.104991  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Using SSH client type: external
	I0108 21:57:58.105010  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/id_rsa (-rw-------)
	I0108 21:57:58.105043  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:57:58.105062  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | About to run SSH command:
	I0108 21:57:58.105077  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | exit 0
	I0108 21:57:58.203670  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | SSH cmd err, output: <nil>: 
	I0108 21:57:58.204009  364439 main.go:141] libmachine: (scheduled-stop-306666) KVM machine creation complete!
	I0108 21:57:58.204335  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetConfigRaw
	I0108 21:57:58.205008  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:57:58.205260  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:57:58.205508  364439 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 21:57:58.205523  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetState
	I0108 21:57:58.206849  364439 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 21:57:58.206861  364439 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 21:57:58.206877  364439 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 21:57:58.206887  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:58.209414  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.209942  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:58.209966  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.210122  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:58.210335  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:58.210512  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:58.210667  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:58.210812  364439 main.go:141] libmachine: Using SSH client type: native
	I0108 21:57:58.211170  364439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0108 21:57:58.211180  364439 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 21:57:58.339616  364439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:57:58.339635  364439 main.go:141] libmachine: Detecting the provisioner...
	I0108 21:57:58.339645  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:58.343023  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.343556  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:58.343586  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.343770  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:58.344087  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:58.344288  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:58.344470  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:58.344652  364439 main.go:141] libmachine: Using SSH client type: native
	I0108 21:57:58.345001  364439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0108 21:57:58.345008  364439 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 21:57:58.476785  364439 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 21:57:58.476900  364439 main.go:141] libmachine: found compatible host: buildroot
	I0108 21:57:58.476908  364439 main.go:141] libmachine: Provisioning with buildroot...
	I0108 21:57:58.476916  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetMachineName
	I0108 21:57:58.477212  364439 buildroot.go:166] provisioning hostname "scheduled-stop-306666"
	I0108 21:57:58.477239  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetMachineName
	I0108 21:57:58.477480  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:58.479959  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.480263  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:58.480286  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.480456  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:58.480643  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:58.480790  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:58.480902  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:58.481130  364439 main.go:141] libmachine: Using SSH client type: native
	I0108 21:57:58.481487  364439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0108 21:57:58.481495  364439 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-306666 && echo "scheduled-stop-306666" | sudo tee /etc/hostname
	I0108 21:57:58.627338  364439 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-306666
	
	I0108 21:57:58.627384  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:58.630825  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.631205  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:58.631263  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.631464  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:58.631845  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:58.632121  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:58.632267  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:58.632429  364439 main.go:141] libmachine: Using SSH client type: native
	I0108 21:57:58.632781  364439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0108 21:57:58.632793  364439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-306666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-306666/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-306666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:57:58.769375  364439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:57:58.769430  364439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 21:57:58.769471  364439 buildroot.go:174] setting up certificates
	I0108 21:57:58.769499  364439 provision.go:83] configureAuth start
	I0108 21:57:58.769512  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetMachineName
	I0108 21:57:58.769951  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetIP
	I0108 21:57:58.773136  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.773427  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:58.773447  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.773634  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:58.776307  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.776616  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:58.776634  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:58.776819  364439 provision.go:138] copyHostCerts
	I0108 21:57:58.776884  364439 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 21:57:58.776890  364439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 21:57:58.776956  364439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 21:57:58.777056  364439 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 21:57:58.777059  364439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 21:57:58.777082  364439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 21:57:58.777154  364439 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 21:57:58.777159  364439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 21:57:58.777181  364439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 21:57:58.777247  364439 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-306666 san=[192.168.39.45 192.168.39.45 localhost 127.0.0.1 minikube scheduled-stop-306666]
	I0108 21:57:59.089667  364439 provision.go:172] copyRemoteCerts
	I0108 21:57:59.089732  364439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:57:59.089761  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:59.092736  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.093094  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:59.093122  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.093324  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:59.093584  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:59.093740  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:59.093910  364439 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/id_rsa Username:docker}
	I0108 21:57:59.189783  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:57:59.216945  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 21:57:59.243256  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:57:59.269780  364439 provision.go:86] duration metric: configureAuth took 500.264449ms
	I0108 21:57:59.269802  364439 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:57:59.270008  364439 config.go:182] Loaded profile config "scheduled-stop-306666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:57:59.270093  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:59.273305  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.273679  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:59.273707  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.273908  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:59.274208  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:59.274390  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:59.274543  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:59.274731  364439 main.go:141] libmachine: Using SSH client type: native
	I0108 21:57:59.275179  364439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0108 21:57:59.275193  364439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:57:59.635042  364439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:57:59.635056  364439 main.go:141] libmachine: Checking connection to Docker...
	I0108 21:57:59.635065  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetURL
	I0108 21:57:59.636957  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Using libvirt version 6000000
	I0108 21:57:59.640072  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.640478  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:59.640523  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.640724  364439 main.go:141] libmachine: Docker is up and running!
	I0108 21:57:59.640736  364439 main.go:141] libmachine: Reticulating splines...
	I0108 21:57:59.640741  364439 client.go:171] LocalClient.Create took 24.60891145s
	I0108 21:57:59.640761  364439 start.go:167] duration metric: libmachine.API.Create for "scheduled-stop-306666" took 24.608975428s
	I0108 21:57:59.640768  364439 start.go:300] post-start starting for "scheduled-stop-306666" (driver="kvm2")
	I0108 21:57:59.640778  364439 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:57:59.640792  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:57:59.641101  364439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:57:59.641129  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:59.643756  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.644191  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:59.644217  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.644450  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:59.644706  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:59.644892  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:59.645064  364439 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/id_rsa Username:docker}
	I0108 21:57:59.739190  364439 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:57:59.743676  364439 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:57:59.743731  364439 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 21:57:59.743818  364439 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 21:57:59.743884  364439 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 21:57:59.743970  364439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:57:59.753628  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:57:59.780117  364439 start.go:303] post-start completed in 139.334573ms
	I0108 21:57:59.780186  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetConfigRaw
	I0108 21:57:59.780817  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetIP
	I0108 21:57:59.783588  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.783879  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:59.783905  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.784134  364439 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/config.json ...
	I0108 21:57:59.784318  364439 start.go:128] duration metric: createHost completed in 24.773860565s
	I0108 21:57:59.784337  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:59.786571  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.786923  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:59.786951  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.787103  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:59.787302  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:59.787473  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:59.787636  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:59.787795  364439 main.go:141] libmachine: Using SSH client type: native
	I0108 21:57:59.788136  364439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I0108 21:57:59.788142  364439 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:57:59.916369  364439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704751079.895800192
	
	I0108 21:57:59.916397  364439 fix.go:206] guest clock: 1704751079.895800192
	I0108 21:57:59.916405  364439 fix.go:219] Guest: 2024-01-08 21:57:59.895800192 +0000 UTC Remote: 2024-01-08 21:57:59.784324666 +0000 UTC m=+24.912715661 (delta=111.475526ms)
	I0108 21:57:59.916424  364439 fix.go:190] guest clock delta is within tolerance: 111.475526ms
	I0108 21:57:59.916428  364439 start.go:83] releasing machines lock for "scheduled-stop-306666", held for 24.906111384s
	I0108 21:57:59.916451  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:57:59.916772  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetIP
	I0108 21:57:59.920076  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.920519  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:59.920540  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.920760  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:57:59.921477  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:57:59.921669  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:57:59.921802  364439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:57:59.921835  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:59.921947  364439 ssh_runner.go:195] Run: cat /version.json
	I0108 21:57:59.921962  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:57:59.924858  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.925104  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.925249  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:59.925274  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.925497  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:59.925553  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:57:59.925576  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:57:59.925746  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:59.925827  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:57:59.925940  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:59.926008  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:57:59.926083  364439 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/id_rsa Username:docker}
	I0108 21:57:59.926185  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:57:59.926323  364439 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/id_rsa Username:docker}
	I0108 21:58:00.038671  364439 ssh_runner.go:195] Run: systemctl --version
	I0108 21:58:00.044731  364439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:58:00.210649  364439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:58:00.218495  364439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:58:00.218571  364439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:58:00.236931  364439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:58:00.236952  364439 start.go:475] detecting cgroup driver to use...
	I0108 21:58:00.237033  364439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:58:00.253481  364439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:58:00.265671  364439 docker.go:203] disabling cri-docker service (if available) ...
	I0108 21:58:00.265741  364439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:58:00.280358  364439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:58:00.296316  364439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:58:00.404072  364439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:58:00.527184  364439 docker.go:219] disabling docker service ...
	I0108 21:58:00.527247  364439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:58:00.541143  364439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:58:00.554139  364439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:58:00.673396  364439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:58:00.808256  364439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:58:00.823994  364439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:58:00.842489  364439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:58:00.842603  364439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:58:00.854015  364439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:58:00.854090  364439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:58:00.865956  364439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:58:00.878409  364439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:58:00.889805  364439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:58:00.900931  364439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:58:00.911693  364439 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:58:00.911764  364439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 21:58:00.927485  364439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:58:00.937529  364439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:58:01.067864  364439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:58:01.260726  364439 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:58:01.260809  364439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:58:01.266252  364439 start.go:543] Will wait 60s for crictl version
	I0108 21:58:01.266318  364439 ssh_runner.go:195] Run: which crictl
	I0108 21:58:01.270524  364439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:58:01.315902  364439 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:58:01.315986  364439 ssh_runner.go:195] Run: crio --version
	I0108 21:58:01.373015  364439 ssh_runner.go:195] Run: crio --version
	I0108 21:58:01.429088  364439 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:58:01.430963  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetIP
	I0108 21:58:01.433546  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:58:01.433976  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:58:01.434001  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:58:01.434207  364439 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:58:01.438361  364439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:58:01.452580  364439 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:58:01.452656  364439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:58:01.496122  364439 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 21:58:01.496178  364439 ssh_runner.go:195] Run: which lz4
	I0108 21:58:01.500160  364439 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 21:58:01.505104  364439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:58:01.505136  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 21:58:03.519629  364439 crio.go:444] Took 2.019500 seconds to copy over tarball
	I0108 21:58:03.519713  364439 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:58:06.707682  364439 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.187932703s)
	I0108 21:58:06.707705  364439 crio.go:451] Took 3.188056 seconds to extract the tarball
	I0108 21:58:06.707714  364439 ssh_runner.go:146] rm: /preloaded.tar.lz4
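
The copy and extract timings above ("Took 2.019500 seconds to copy over tarball", "Took 3.188056 seconds to extract the tarball") come from wrapping the commands with a simple stopwatch. A rough local equivalent, assuming the same tarball path and tar invocation as in the log (minikube actually runs this over SSH), could look like this:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same tar invocation as in the log; the tarball path is the one scp'd above.
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("Took %f seconds to extract the tarball\n", time.Since(start).Seconds())
}
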
	I0108 21:58:06.752079  364439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:58:06.835726  364439 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:58:06.835742  364439 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:58:06.835817  364439 ssh_runner.go:195] Run: crio config
	I0108 21:58:06.907321  364439 cni.go:84] Creating CNI manager for ""
	I0108 21:58:06.907335  364439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:58:06.907355  364439 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:58:06.907400  364439 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.45 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-306666 NodeName:scheduled-stop-306666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:58:06.907585  364439 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-306666"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:58:06.907666  364439 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=scheduled-stop-306666 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-306666 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:58:06.907717  364439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:58:06.918932  364439 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:58:06.919011  364439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:58:06.929593  364439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0108 21:58:06.948581  364439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:58:06.969737  364439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
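
The kubeadm options logged above are rendered into the multi-document YAML shown earlier before being copied to /var/tmp/minikube/kubeadm.yaml.new. A trimmed-down sketch of that rendering with Go's text/template follows; the struct and template here are illustrative stand-ins, not minikube's actual types or template.

package main

import (
	"os"
	"text/template"
)

// nodeOpts is a hypothetical subset of the options logged above.
type nodeOpts struct {
	NodeName         string
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	// Values taken from the kubeadm config shown in the log.
	opts := nodeOpts{
		NodeName:         "scheduled-stop-306666",
		AdvertiseAddress: "192.168.39.45",
		BindPort:         8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
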
	I0108 21:58:06.988121  364439 ssh_runner.go:195] Run: grep 192.168.39.45	control-plane.minikube.internal$ /etc/hosts
	I0108 21:58:06.992071  364439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:58:07.006325  364439 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666 for IP: 192.168.39.45
	I0108 21:58:07.006357  364439 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:58:07.006571  364439 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 21:58:07.006611  364439 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 21:58:07.006660  364439 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/client.key
	I0108 21:58:07.006668  364439 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/client.crt with IP's: []
	I0108 21:58:07.423124  364439 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/client.crt ...
	I0108 21:58:07.423141  364439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/client.crt: {Name:mk2c82fd8a230c30f4caec875872ecf08f00e7bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:58:07.423328  364439 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/client.key ...
	I0108 21:58:07.423336  364439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/client.key: {Name:mk90c8aeef615773846795996382aca16558ebaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:58:07.423454  364439 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.key.7aba1c1f
	I0108 21:58:07.423473  364439 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.crt.7aba1c1f with IP's: [192.168.39.45 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:58:07.845129  364439 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.crt.7aba1c1f ...
	I0108 21:58:07.845158  364439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.crt.7aba1c1f: {Name:mk783216753399699679a9d3127deb4a2a820821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:58:07.845373  364439 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.key.7aba1c1f ...
	I0108 21:58:07.845390  364439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.key.7aba1c1f: {Name:mkd491489fae4eaf1f94b99b0581adb21b957728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:58:07.845469  364439 certs.go:337] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.crt.7aba1c1f -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.crt
	I0108 21:58:07.845558  364439 certs.go:341] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.key.7aba1c1f -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.key
	I0108 21:58:07.845607  364439 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/proxy-client.key
	I0108 21:58:07.845617  364439 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/proxy-client.crt with IP's: []
	I0108 21:58:07.930887  364439 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/proxy-client.crt ...
	I0108 21:58:07.930905  364439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/proxy-client.crt: {Name:mkef04c73174301d215aeb2bf51f771afcf959f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:58:07.931135  364439 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/proxy-client.key ...
	I0108 21:58:07.931144  364439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/proxy-client.key: {Name:mkbac99d2979ba7c82c6d9b67e9b9dffc3d7518a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:58:07.931421  364439 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 21:58:07.931480  364439 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 21:58:07.931490  364439 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:58:07.931517  364439 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:58:07.931553  364439 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:58:07.931573  364439 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 21:58:07.931615  364439 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 21:58:07.932446  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:58:07.962209  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:58:07.988590  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:58:08.014563  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/scheduled-stop-306666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:58:08.041924  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:58:08.068824  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:58:08.096926  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:58:08.122693  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:58:08.149985  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:58:08.177511  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 21:58:08.204109  364439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 21:58:08.229840  364439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:58:08.249881  364439 ssh_runner.go:195] Run: openssl version
	I0108 21:58:08.256311  364439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 21:58:08.270034  364439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 21:58:08.276487  364439 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 21:58:08.276566  364439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 21:58:08.282599  364439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 21:58:08.295935  364439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 21:58:08.308372  364439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 21:58:08.314750  364439 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 21:58:08.314847  364439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 21:58:08.322012  364439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:58:08.334338  364439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:58:08.347345  364439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:58:08.352415  364439 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:58:08.352480  364439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:58:08.359064  364439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:58:08.372798  364439 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:58:08.377526  364439 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:58:08.377589  364439 kubeadm.go:404] StartCluster: {Name:scheduled-stop-306666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:scheduled-stop-306666 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:58:08.377669  364439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:58:08.377721  364439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:58:08.420875  364439 cri.go:89] found id: ""
	I0108 21:58:08.420958  364439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:58:08.433951  364439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:58:08.445009  364439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:58:08.455857  364439 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:58:08.455899  364439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 21:58:08.875880  364439 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:58:21.769185  364439 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 21:58:21.769229  364439 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:58:21.769338  364439 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:58:21.769446  364439 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:58:21.769565  364439 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:58:21.769648  364439 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:58:21.772313  364439 out.go:204]   - Generating certificates and keys ...
	I0108 21:58:21.772469  364439 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:58:21.772562  364439 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:58:21.772657  364439 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:58:21.772743  364439 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:58:21.772830  364439 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:58:21.772898  364439 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 21:58:21.772972  364439 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 21:58:21.773132  364439 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-306666] and IPs [192.168.39.45 127.0.0.1 ::1]
	I0108 21:58:21.773195  364439 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 21:58:21.773321  364439 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-306666] and IPs [192.168.39.45 127.0.0.1 ::1]
	I0108 21:58:21.773400  364439 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:58:21.773512  364439 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:58:21.773571  364439 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 21:58:21.773654  364439 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:58:21.773714  364439 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:58:21.773780  364439 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:58:21.773872  364439 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:58:21.773949  364439 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:58:21.774074  364439 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:58:21.774148  364439 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:58:21.775856  364439 out.go:204]   - Booting up control plane ...
	I0108 21:58:21.775948  364439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:58:21.776023  364439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:58:21.776084  364439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:58:21.776252  364439 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:58:21.776361  364439 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:58:21.776411  364439 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:58:21.776598  364439 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:58:21.776663  364439 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504360 seconds
	I0108 21:58:21.776770  364439 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:58:21.776895  364439 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:58:21.776965  364439 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:58:21.777229  364439 kubeadm.go:322] [mark-control-plane] Marking the node scheduled-stop-306666 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:58:21.777312  364439 kubeadm.go:322] [bootstrap-token] Using token: gxaz8z.i3614r9j21ufz3a6
	I0108 21:58:21.779163  364439 out.go:204]   - Configuring RBAC rules ...
	I0108 21:58:21.779283  364439 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:58:21.779353  364439 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:58:21.779519  364439 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:58:21.779710  364439 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:58:21.779836  364439 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:58:21.779931  364439 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:58:21.780051  364439 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:58:21.780090  364439 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:58:21.780128  364439 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:58:21.780131  364439 kubeadm.go:322] 
	I0108 21:58:21.780209  364439 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:58:21.780212  364439 kubeadm.go:322] 
	I0108 21:58:21.780320  364439 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:58:21.780326  364439 kubeadm.go:322] 
	I0108 21:58:21.780365  364439 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:58:21.780450  364439 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:58:21.780526  364439 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:58:21.780533  364439 kubeadm.go:322] 
	I0108 21:58:21.780611  364439 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 21:58:21.780620  364439 kubeadm.go:322] 
	I0108 21:58:21.780716  364439 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:58:21.780722  364439 kubeadm.go:322] 
	I0108 21:58:21.780792  364439 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:58:21.780893  364439 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:58:21.780977  364439 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:58:21.780984  364439 kubeadm.go:322] 
	I0108 21:58:21.781098  364439 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:58:21.781200  364439 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:58:21.781206  364439 kubeadm.go:322] 
	I0108 21:58:21.781348  364439 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gxaz8z.i3614r9j21ufz3a6 \
	I0108 21:58:21.781504  364439 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 21:58:21.781538  364439 kubeadm.go:322] 	--control-plane 
	I0108 21:58:21.781544  364439 kubeadm.go:322] 
	I0108 21:58:21.781648  364439 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:58:21.781655  364439 kubeadm.go:322] 
	I0108 21:58:21.781753  364439 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gxaz8z.i3614r9j21ufz3a6 \
	I0108 21:58:21.781894  364439 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
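
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). A small Go sketch for recomputing it from the CA certificate that was copied to /var/lib/minikube/certs/ca.crt earlier in the log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log above; adjust if the CA lives elsewhere.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum[:])
}

The output should match the sha256:5592636d... value printed by kubeadm init above.
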
	I0108 21:58:21.781907  364439 cni.go:84] Creating CNI manager for ""
	I0108 21:58:21.781913  364439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:58:21.783948  364439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:58:21.785554  364439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:58:21.813886  364439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 21:58:21.889105  364439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:58:21.889189  364439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=scheduled-stop-306666 minikube.k8s.io/updated_at=2024_01_08T21_58_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:58:21.889199  364439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:58:22.278350  364439 kubeadm.go:1088] duration metric: took 389.259784ms to wait for elevateKubeSystemPrivileges.
	I0108 21:58:22.315070  364439 ops.go:34] apiserver oom_adj: -16
	I0108 21:58:22.315104  364439 kubeadm.go:406] StartCluster complete in 13.937520014s
	I0108 21:58:22.315129  364439 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:58:22.315209  364439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:58:22.316044  364439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:58:22.316305  364439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:58:22.316439  364439 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:58:22.316499  364439 config.go:182] Loaded profile config "scheduled-stop-306666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:58:22.316522  364439 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-306666"
	I0108 21:58:22.316530  364439 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-306666"
	I0108 21:58:22.316546  364439 addons.go:237] Setting addon storage-provisioner=true in "scheduled-stop-306666"
	I0108 21:58:22.316552  364439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-306666"
	I0108 21:58:22.316625  364439 host.go:66] Checking if "scheduled-stop-306666" exists ...
	I0108 21:58:22.316932  364439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:58:22.316984  364439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:58:22.317015  364439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:58:22.317034  364439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:58:22.333559  364439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0108 21:58:22.333623  364439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0108 21:58:22.334041  364439 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:58:22.334098  364439 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:58:22.334588  364439 main.go:141] libmachine: Using API Version  1
	I0108 21:58:22.334606  364439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:58:22.334736  364439 main.go:141] libmachine: Using API Version  1
	I0108 21:58:22.334753  364439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:58:22.335019  364439 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:58:22.335151  364439 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:58:22.335212  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetState
	I0108 21:58:22.335722  364439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:58:22.335750  364439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:58:22.339015  364439 addons.go:237] Setting addon default-storageclass=true in "scheduled-stop-306666"
	I0108 21:58:22.339067  364439 host.go:66] Checking if "scheduled-stop-306666" exists ...
	I0108 21:58:22.339587  364439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:58:22.339630  364439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:58:22.352704  364439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40229
	I0108 21:58:22.353150  364439 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:58:22.353651  364439 main.go:141] libmachine: Using API Version  1
	I0108 21:58:22.353661  364439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:58:22.353915  364439 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:58:22.354099  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetState
	I0108 21:58:22.355928  364439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42075
	I0108 21:58:22.355930  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:58:22.358286  364439 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:58:22.356375  364439 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:58:22.360154  364439 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:58:22.360164  364439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:58:22.360178  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:58:22.360547  364439 main.go:141] libmachine: Using API Version  1
	I0108 21:58:22.360566  364439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:58:22.360846  364439 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:58:22.361401  364439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:58:22.361429  364439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:58:22.364139  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:58:22.364543  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:58:22.364571  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:58:22.364957  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:58:22.365293  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:58:22.365508  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:58:22.365714  364439 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/id_rsa Username:docker}
	I0108 21:58:22.378232  364439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39835
	I0108 21:58:22.378786  364439 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:58:22.379440  364439 main.go:141] libmachine: Using API Version  1
	I0108 21:58:22.379473  364439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:58:22.379926  364439 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:58:22.380163  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetState
	I0108 21:58:22.382345  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .DriverName
	I0108 21:58:22.382733  364439 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:58:22.382746  364439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:58:22.382771  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHHostname
	I0108 21:58:22.385892  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:58:22.386420  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:c8:28", ip: ""} in network mk-scheduled-stop-306666: {Iface:virbr1 ExpiryTime:2024-01-08 22:57:51 +0000 UTC Type:0 Mac:52:54:00:00:c8:28 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:scheduled-stop-306666 Clientid:01:52:54:00:00:c8:28}
	I0108 21:58:22.386460  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | domain scheduled-stop-306666 has defined IP address 192.168.39.45 and MAC address 52:54:00:00:c8:28 in network mk-scheduled-stop-306666
	I0108 21:58:22.386721  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHPort
	I0108 21:58:22.386982  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHKeyPath
	I0108 21:58:22.387171  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .GetSSHUsername
	I0108 21:58:22.387322  364439 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/scheduled-stop-306666/id_rsa Username:docker}
	I0108 21:58:22.445493  364439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:58:22.498529  364439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:58:22.540494  364439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:58:22.841409  364439 kapi.go:248] "coredns" deployment in "kube-system" namespace and "scheduled-stop-306666" context rescaled to 1 replicas
	I0108 21:58:22.841449  364439 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:58:22.843585  364439 out.go:177] * Verifying Kubernetes components...
	I0108 21:58:22.845110  364439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:58:23.747573  364439 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.302033313s)
	I0108 21:58:23.747602  364439 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 21:58:24.025121  364439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.526516362s)
	I0108 21:58:24.025159  364439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.484636959s)
	I0108 21:58:24.025203  364439 main.go:141] libmachine: Making call to close driver server
	I0108 21:58:24.025197  364439 main.go:141] libmachine: Making call to close driver server
	I0108 21:58:24.025211  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .Close
	I0108 21:58:24.025215  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .Close
	I0108 21:58:24.025229  364439 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.180098043s)
	I0108 21:58:24.025630  364439 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:58:24.025641  364439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:58:24.025651  364439 main.go:141] libmachine: Making call to close driver server
	I0108 21:58:24.025662  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .Close
	I0108 21:58:24.025731  364439 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:58:24.025754  364439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:58:24.025764  364439 main.go:141] libmachine: Making call to close driver server
	I0108 21:58:24.025771  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .Close
	I0108 21:58:24.025978  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Closing plugin on server side
	I0108 21:58:24.026012  364439 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:58:24.026019  364439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:58:24.026381  364439 main.go:141] libmachine: (scheduled-stop-306666) DBG | Closing plugin on server side
	I0108 21:58:24.026482  364439 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:58:24.026495  364439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:58:24.026671  364439 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:58:24.026750  364439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:58:24.044516  364439 main.go:141] libmachine: Making call to close driver server
	I0108 21:58:24.044530  364439 main.go:141] libmachine: (scheduled-stop-306666) Calling .Close
	I0108 21:58:24.044884  364439 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:58:24.044898  364439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:58:24.047899  364439 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:58:24.049333  364439 addons.go:508] enable addons completed in 1.732908155s: enabled=[storage-provisioner default-storageclass]
	I0108 21:58:24.052642  364439 api_server.go:72] duration metric: took 1.211155471s to wait for apiserver process to appear ...
	I0108 21:58:24.052658  364439 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:58:24.052681  364439 api_server.go:253] Checking apiserver healthz at https://192.168.39.45:8443/healthz ...
	I0108 21:58:24.062475  364439 api_server.go:279] https://192.168.39.45:8443/healthz returned 200:
	ok
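
The healthz check above is a plain HTTPS GET against the apiserver, retried until it answers. A minimal Go sketch of that polling loop is below; it skips TLS verification for brevity, whereas the real client trusts the cluster CA, and the endpoint is the one reported in the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.45:8443/healthz" // endpoint from the log above
	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
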
	I0108 21:58:24.064487  364439 api_server.go:141] control plane version: v1.28.4
	I0108 21:58:24.064512  364439 api_server.go:131] duration metric: took 11.848023ms to wait for apiserver health ...
	I0108 21:58:24.064522  364439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:58:24.071489  364439 system_pods.go:59] 5 kube-system pods found
	I0108 21:58:24.071509  364439 system_pods.go:61] "etcd-scheduled-stop-306666" [09bbfb46-aaea-4865-a65b-a5524d7ee970] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:58:24.071515  364439 system_pods.go:61] "kube-apiserver-scheduled-stop-306666" [32e7aa0d-3e8b-49fd-8e20-198dcbbf1ba8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:58:24.071522  364439 system_pods.go:61] "kube-controller-manager-scheduled-stop-306666" [234e5573-daa3-43af-9a2f-40bdd4994268] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:58:24.071528  364439 system_pods.go:61] "kube-scheduler-scheduled-stop-306666" [1e0d8b3f-a333-4ce0-84e5-6651806757db] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:58:24.071534  364439 system_pods.go:61] "storage-provisioner" [8e3f6664-59ae-417c-b633-c82c16d0d957] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0108 21:58:24.071565  364439 system_pods.go:74] duration metric: took 7.013832ms to wait for pod list to return data ...
	I0108 21:58:24.071574  364439 kubeadm.go:581] duration metric: took 1.230096746s to wait for : map[apiserver:true system_pods:true] ...
	I0108 21:58:24.071586  364439 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:58:24.075566  364439 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:58:24.075586  364439 node_conditions.go:123] node cpu capacity is 2
	I0108 21:58:24.075597  364439 node_conditions.go:105] duration metric: took 4.007211ms to run NodePressure ...
	I0108 21:58:24.075607  364439 start.go:228] waiting for startup goroutines ...
	I0108 21:58:24.075612  364439 start.go:233] waiting for cluster config update ...
	I0108 21:58:24.075621  364439 start.go:242] writing updated cluster config ...
	I0108 21:58:24.075864  364439 ssh_runner.go:195] Run: rm -f paused
	I0108 21:58:24.130430  364439 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:58:24.132627  364439 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-306666" cluster and "default" namespace by default
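
The closing "minor skew: 1" note compares the local kubectl minor version against the cluster's; kubectl supports a skew of one minor version in either direction, so only a larger skew would produce a warning. A sketch of that comparison, using the versions reported in the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor number from a "major.minor.patch" version string.
func minorOf(v string) int {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.29.0", "1.28.4" // versions reported in the log above
	skew := minorOf(kubectl) - minorOf(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew)
	if skew > 1 {
		fmt.Println("warning: kubectl is more than one minor version away from the cluster")
	}
}
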
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:57:48 UTC, ends at Mon 2024-01-08 21:58:25 UTC. --
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.355965315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704751105355950031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=d1e97831-e79b-4404-89e4-b2c8b57ecedc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.356751824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=235f60d9-3a75-4636-9b4c-8f64133dc7d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.356797833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=235f60d9-3a75-4636-9b4c-8f64133dc7d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.356911270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12a8cd539ac403759e5ffa6a380d982e9711951cb49a3588eaa9476d68b42307,PodSandboxId:7a3ec24abf14bd1a2dcc0fa1a6c51719e9dc1fc2f0083275478687de29a96ba2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704751094296419931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e8a67bb497dedc18b469ddb51c7aaf8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b2da4811e0b118f5c42ca3ff8d215b0e02d4cee089010fc546e27f3f888ea5,PodSandboxId:50165e30121c9e125f1d70e617b2c9dd1331c3ee261a572f8662a44a9ee8f4e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704751094203330647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd33eda8adf8faa318383b2610abfb2e,},Annotations:map[string]string{io.kubernetes.container.hash: c5e1fb44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9797c26ba77cf427b28da965420a1433ebfad8b8b85c6a9977040227aa2d2eeb,PodSandboxId:b12ca1bc0bd147f1283084c149dacebb334851a81c0e9d521d69e9ec62bf692b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704751093694810901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8c9a1f91774bc57956b1a09e52191,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ea95b66ab44448e551fa649fcb583a9b5a50d6a1579f1ba6a29430cb0aae4f,PodSandboxId:0c384a0f258067a789a0c82fbd9f4d4e37c1f671e61756161441bef42bd7226c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704751093551044230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c602f8861fd79f4cfa600e920e78d4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 21ab4526,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=235f60d9-3a75-4636-9b4c-8f64133dc7d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.397663558Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=016cec5e-6a0e-4086-a7a4-99445369fc18 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.397724803Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=016cec5e-6a0e-4086-a7a4-99445369fc18 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.399997399Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=65703944-6750-4609-bb66-689e3bf336ae name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.400439643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704751105400423984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=65703944-6750-4609-bb66-689e3bf336ae name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.401588311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6ab06db7-e8cb-4b4d-930c-00e9f570c619 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.401638448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6ab06db7-e8cb-4b4d-930c-00e9f570c619 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.401765494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12a8cd539ac403759e5ffa6a380d982e9711951cb49a3588eaa9476d68b42307,PodSandboxId:7a3ec24abf14bd1a2dcc0fa1a6c51719e9dc1fc2f0083275478687de29a96ba2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704751094296419931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e8a67bb497dedc18b469ddb51c7aaf8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b2da4811e0b118f5c42ca3ff8d215b0e02d4cee089010fc546e27f3f888ea5,PodSandboxId:50165e30121c9e125f1d70e617b2c9dd1331c3ee261a572f8662a44a9ee8f4e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704751094203330647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd33eda8adf8faa318383b2610abfb2e,},Annotations:map[string]string{io.kubernetes.container.hash: c5e1fb44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9797c26ba77cf427b28da965420a1433ebfad8b8b85c6a9977040227aa2d2eeb,PodSandboxId:b12ca1bc0bd147f1283084c149dacebb334851a81c0e9d521d69e9ec62bf692b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704751093694810901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8c9a1f91774bc57956b1a09e52191,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ea95b66ab44448e551fa649fcb583a9b5a50d6a1579f1ba6a29430cb0aae4f,PodSandboxId:0c384a0f258067a789a0c82fbd9f4d4e37c1f671e61756161441bef42bd7226c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704751093551044230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c602f8861fd79f4cfa600e920e78d4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 21ab4526,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6ab06db7-e8cb-4b4d-930c-00e9f570c619 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.446175786Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b2fd7c0f-0c78-456c-b2d4-b4d34ea80e82 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.446312455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b2fd7c0f-0c78-456c-b2d4-b4d34ea80e82 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.448755620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=59dcb182-43d2-4011-a4df-fc22d320a0e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.449144784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704751105449127468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=59dcb182-43d2-4011-a4df-fc22d320a0e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.449879099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=366a8018-4d9f-439f-b972-60add705569a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.449931076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=366a8018-4d9f-439f-b972-60add705569a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.450067428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12a8cd539ac403759e5ffa6a380d982e9711951cb49a3588eaa9476d68b42307,PodSandboxId:7a3ec24abf14bd1a2dcc0fa1a6c51719e9dc1fc2f0083275478687de29a96ba2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704751094296419931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e8a67bb497dedc18b469ddb51c7aaf8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b2da4811e0b118f5c42ca3ff8d215b0e02d4cee089010fc546e27f3f888ea5,PodSandboxId:50165e30121c9e125f1d70e617b2c9dd1331c3ee261a572f8662a44a9ee8f4e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704751094203330647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd33eda8adf8faa318383b2610abfb2e,},Annotations:map[string]string{io.kubernetes.container.hash: c5e1fb44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9797c26ba77cf427b28da965420a1433ebfad8b8b85c6a9977040227aa2d2eeb,PodSandboxId:b12ca1bc0bd147f1283084c149dacebb334851a81c0e9d521d69e9ec62bf692b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704751093694810901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8c9a1f91774bc57956b1a09e52191,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ea95b66ab44448e551fa649fcb583a9b5a50d6a1579f1ba6a29430cb0aae4f,PodSandboxId:0c384a0f258067a789a0c82fbd9f4d4e37c1f671e61756161441bef42bd7226c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704751093551044230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c602f8861fd79f4cfa600e920e78d4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 21ab4526,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=366a8018-4d9f-439f-b972-60add705569a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.490669062Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5771679f-88b7-4d23-bd30-33086436cb41 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.490763185Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5771679f-88b7-4d23-bd30-33086436cb41 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.492294340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=123889e0-effb-4c71-a9e5-81f4dd8c1502 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.492773014Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704751105492757269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=123889e0-effb-4c71-a9e5-81f4dd8c1502 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.493445420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cee703ea-04c3-4e5b-b090-f36c9d4c0381 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.493522229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cee703ea-04c3-4e5b-b090-f36c9d4c0381 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:58:25 scheduled-stop-306666 crio[714]: time="2024-01-08 21:58:25.493644575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:12a8cd539ac403759e5ffa6a380d982e9711951cb49a3588eaa9476d68b42307,PodSandboxId:7a3ec24abf14bd1a2dcc0fa1a6c51719e9dc1fc2f0083275478687de29a96ba2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704751094296419931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e8a67bb497dedc18b469ddb51c7aaf8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b2da4811e0b118f5c42ca3ff8d215b0e02d4cee089010fc546e27f3f888ea5,PodSandboxId:50165e30121c9e125f1d70e617b2c9dd1331c3ee261a572f8662a44a9ee8f4e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704751094203330647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd33eda8adf8faa318383b2610abfb2e,},Annotations:map[string]string{io.kubernetes.container.hash: c5e1fb44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9797c26ba77cf427b28da965420a1433ebfad8b8b85c6a9977040227aa2d2eeb,PodSandboxId:b12ca1bc0bd147f1283084c149dacebb334851a81c0e9d521d69e9ec62bf692b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704751093694810901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8c9a1f91774bc57956b1a09e52191,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ea95b66ab44448e551fa649fcb583a9b5a50d6a1579f1ba6a29430cb0aae4f,PodSandboxId:0c384a0f258067a789a0c82fbd9f4d4e37c1f671e61756161441bef42bd7226c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704751093551044230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-scheduled-stop-306666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c602f8861fd79f4cfa600e920e78d4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 21ab4526,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cee703ea-04c3-4e5b-b090-f36c9d4c0381 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	12a8cd539ac40       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   11 seconds ago      Running             kube-scheduler            0                   7a3ec24abf14b       kube-scheduler-scheduled-stop-306666
	b3b2da4811e0b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   11 seconds ago      Running             etcd                      0                   50165e30121c9       etcd-scheduled-stop-306666
	9797c26ba77cf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   11 seconds ago      Running             kube-controller-manager   0                   b12ca1bc0bd14       kube-controller-manager-scheduled-stop-306666
	a9ea95b66ab44       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   12 seconds ago      Running             kube-apiserver            0                   0c384a0f25806       kube-apiserver-scheduled-stop-306666
	
	
	==> describe nodes <==
	Name:               scheduled-stop-306666
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=scheduled-stop-306666
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=scheduled-stop-306666
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_58_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:58:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-306666
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:58:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:58:22 +0000   Mon, 08 Jan 2024 21:58:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:58:22 +0000   Mon, 08 Jan 2024 21:58:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:58:22 +0000   Mon, 08 Jan 2024 21:58:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:58:22 +0000   Mon, 08 Jan 2024 21:58:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.45
	  Hostname:    scheduled-stop-306666
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdc1f3f39e9940e08b8c482567cad64f
	  System UUID:                cdc1f3f3-9e99-40e0-8b8c-482567cad64f
	  Boot ID:                    784ac95e-3129-4332-b750-2432cd88d715
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-306666                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3s
	  kube-system                 kube-apiserver-scheduled-stop-306666             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-scheduled-stop-306666    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-scheduled-stop-306666             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (5%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet  Node scheduled-stop-306666 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet  Node scheduled-stop-306666 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x7 over 13s)  kubelet  Node scheduled-stop-306666 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 4s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                 kubelet  Node scheduled-stop-306666 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet  Node scheduled-stop-306666 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet  Node scheduled-stop-306666 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                 kubelet  Node scheduled-stop-306666 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan 8 21:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.516743] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.834420] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143563] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.119308] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan 8 21:58] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.113265] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.146643] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.132687] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.263806] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +10.673243] systemd-fstab-generator[922]: Ignoring "noauto" for root device
	[  +9.818752] systemd-fstab-generator[1257]: Ignoring "noauto" for root device
	
	
	==> etcd [b3b2da4811e0b118f5c42ca3ff8d215b0e02d4cee089010fc546e27f3f888ea5] <==
	{"level":"info","ts":"2024-01-08T21:58:15.961689Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2024-01-08T21:58:15.961769Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.45:2380"}
	{"level":"info","ts":"2024-01-08T21:58:15.96162Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T21:58:15.963505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce switched to configuration voters=(15242124114575169998)"}
	{"level":"info","ts":"2024-01-08T21:58:15.965478Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"34c61d36ecc5c83e","local-member-id":"d386e7203fab19ce","added-peer-id":"d386e7203fab19ce","added-peer-peer-urls":["https://192.168.39.45:2380"]}
	{"level":"info","ts":"2024-01-08T21:58:15.965661Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d386e7203fab19ce","initial-advertise-peer-urls":["https://192.168.39.45:2380"],"listen-peer-urls":["https://192.168.39.45:2380"],"advertise-client-urls":["https://192.168.39.45:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.45:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T21:58:15.965719Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T21:58:16.10333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T21:58:16.103479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T21:58:16.103518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgPreVoteResp from d386e7203fab19ce at term 1"}
	{"level":"info","ts":"2024-01-08T21:58:16.103549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:58:16.103574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce received MsgVoteResp from d386e7203fab19ce at term 2"}
	{"level":"info","ts":"2024-01-08T21:58:16.103602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d386e7203fab19ce became leader at term 2"}
	{"level":"info","ts":"2024-01-08T21:58:16.103629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d386e7203fab19ce elected leader d386e7203fab19ce at term 2"}
	{"level":"info","ts":"2024-01-08T21:58:16.108437Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:58:16.112566Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d386e7203fab19ce","local-member-attributes":"{Name:scheduled-stop-306666 ClientURLs:[https://192.168.39.45:2379]}","request-path":"/0/members/d386e7203fab19ce/attributes","cluster-id":"34c61d36ecc5c83e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:58:16.112653Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:58:16.117489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:58:16.119339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:58:16.120409Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"34c61d36ecc5c83e","local-member-id":"d386e7203fab19ce","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:58:16.120471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.45:2379"}
	{"level":"info","ts":"2024-01-08T21:58:16.120699Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:58:16.120745Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:58:16.12086Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:58:16.120888Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:58:25 up 0 min,  0 users,  load average: 1.49, 0.40, 0.14
	Linux scheduled-stop-306666 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a9ea95b66ab44448e551fa649fcb583a9b5a50d6a1579f1ba6a29430cb0aae4f] <==
	I0108 21:58:18.184852       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 21:58:18.184859       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 21:58:18.184866       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:58:18.220018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:58:18.222656       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 21:58:18.223299       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:58:18.224069       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 21:58:18.226387       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 21:58:18.226450       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 21:58:18.228278       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 21:58:18.230120       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 21:58:18.300266       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:58:19.030531       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:58:19.042681       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:58:19.042761       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:58:19.892869       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:58:19.975362       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:58:20.079551       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 21:58:20.090538       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.45]
	I0108 21:58:20.092079       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 21:58:20.099972       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:58:20.150692       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 21:58:21.655956       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 21:58:21.686827       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 21:58:21.712518       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [9797c26ba77cf427b28da965420a1433ebfad8b8b85c6a9977040227aa2d2eeb] <==
	I0108 21:58:20.189063       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0108 21:58:20.189099       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0108 21:58:20.191391       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0108 21:58:20.191494       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0108 21:58:20.191660       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0108 21:58:20.191508       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0108 21:58:20.203184       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0108 21:58:20.203394       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0108 21:58:20.203404       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0108 21:58:20.203426       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0108 21:58:20.229131       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0108 21:58:20.229409       1 horizontal.go:200] "Starting HPA controller"
	I0108 21:58:20.229449       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0108 21:58:20.239916       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0108 21:58:20.240154       1 stateful_set.go:161] "Starting stateful set controller"
	I0108 21:58:20.240258       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0108 21:58:20.241936       1 shared_informer.go:318] Caches are synced for tokens
	I0108 21:58:20.243637       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0108 21:58:20.243677       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0108 21:58:20.244010       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0108 21:58:20.255536       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0108 21:58:20.255603       1 ttl_controller.go:124] "Starting TTL controller"
	I0108 21:58:20.255906       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0108 21:58:20.275737       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0108 21:58:20.275959       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	
	
	==> kube-scheduler [12a8cd539ac403759e5ffa6a380d982e9711951cb49a3588eaa9476d68b42307] <==
	W0108 21:58:19.134519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:58:19.134583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:58:19.134630       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:58:19.134637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:58:19.156694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:58:19.156767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:58:19.220169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:58:19.220295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:58:19.317474       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:58:19.317535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:58:19.454285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:58:19.454338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:58:19.539287       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:58:19.539340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:58:19.582419       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:58:19.582483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 21:58:19.585130       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:58:19.585239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:58:19.623334       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:58:19.623388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:58:19.630934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:58:19.630988       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:58:19.682678       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:58:19.682728       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 21:58:22.062815       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:57:48 UTC, ends at Mon 2024-01-08 21:58:25 UTC. --
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.151796    1264 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.176896    1264 topology_manager.go:215] "Topology Admit Handler" podUID="cd33eda8adf8faa318383b2610abfb2e" podNamespace="kube-system" podName="etcd-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.177120    1264 topology_manager.go:215] "Topology Admit Handler" podUID="c602f8861fd79f4cfa600e920e78d4a5" podNamespace="kube-system" podName="kube-apiserver-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.177256    1264 topology_manager.go:215] "Topology Admit Handler" podUID="e5b8c9a1f91774bc57956b1a09e52191" podNamespace="kube-system" podName="kube-controller-manager-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.177306    1264 topology_manager.go:215] "Topology Admit Handler" podUID="4e8a67bb497dedc18b469ddb51c7aaf8" podNamespace="kube-system" podName="kube-scheduler-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: E0108 21:58:22.202591    1264 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-scheduled-stop-306666\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: E0108 21:58:22.210640    1264 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-scheduled-stop-306666\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.237847    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c602f8861fd79f4cfa600e920e78d4a5-ca-certs\") pod \"kube-apiserver-scheduled-stop-306666\" (UID: \"c602f8861fd79f4cfa600e920e78d4a5\") " pod="kube-system/kube-apiserver-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.237927    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c602f8861fd79f4cfa600e920e78d4a5-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-306666\" (UID: \"c602f8861fd79f4cfa600e920e78d4a5\") " pod="kube-system/kube-apiserver-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.237956    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5b8c9a1f91774bc57956b1a09e52191-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-306666\" (UID: \"e5b8c9a1f91774bc57956b1a09e52191\") " pod="kube-system/kube-controller-manager-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.237978    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5b8c9a1f91774bc57956b1a09e52191-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-306666\" (UID: \"e5b8c9a1f91774bc57956b1a09e52191\") " pod="kube-system/kube-controller-manager-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.238010    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5b8c9a1f91774bc57956b1a09e52191-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-306666\" (UID: \"e5b8c9a1f91774bc57956b1a09e52191\") " pod="kube-system/kube-controller-manager-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.238032    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5b8c9a1f91774bc57956b1a09e52191-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-306666\" (UID: \"e5b8c9a1f91774bc57956b1a09e52191\") " pod="kube-system/kube-controller-manager-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.238051    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e8a67bb497dedc18b469ddb51c7aaf8-kubeconfig\") pod \"kube-scheduler-scheduled-stop-306666\" (UID: \"4e8a67bb497dedc18b469ddb51c7aaf8\") " pod="kube-system/kube-scheduler-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.238076    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/cd33eda8adf8faa318383b2610abfb2e-etcd-certs\") pod \"etcd-scheduled-stop-306666\" (UID: \"cd33eda8adf8faa318383b2610abfb2e\") " pod="kube-system/etcd-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.238097    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/cd33eda8adf8faa318383b2610abfb2e-etcd-data\") pod \"etcd-scheduled-stop-306666\" (UID: \"cd33eda8adf8faa318383b2610abfb2e\") " pod="kube-system/etcd-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.238125    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c602f8861fd79f4cfa600e920e78d4a5-k8s-certs\") pod \"kube-apiserver-scheduled-stop-306666\" (UID: \"c602f8861fd79f4cfa600e920e78d4a5\") " pod="kube-system/kube-apiserver-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.238147    1264 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5b8c9a1f91774bc57956b1a09e52191-ca-certs\") pod \"kube-controller-manager-scheduled-stop-306666\" (UID: \"e5b8c9a1f91774bc57956b1a09e52191\") " pod="kube-system/kube-controller-manager-scheduled-stop-306666"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.258843    1264 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.799466    1264 apiserver.go:52] "Watching apiserver"
	Jan 08 21:58:22 scheduled-stop-306666 kubelet[1264]: I0108 21:58:22.829732    1264 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 08 21:58:23 scheduled-stop-306666 kubelet[1264]: I0108 21:58:23.167709    1264 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-306666" podStartSLOduration=1.167619454 podCreationTimestamp="2024-01-08 21:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:58:23.141674667 +0000 UTC m=+1.501179440" watchObservedRunningTime="2024-01-08 21:58:23.167619454 +0000 UTC m=+1.527124218"
	Jan 08 21:58:23 scheduled-stop-306666 kubelet[1264]: I0108 21:58:23.204580    1264 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-306666" podStartSLOduration=3.204542041 podCreationTimestamp="2024-01-08 21:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:58:23.167990841 +0000 UTC m=+1.527495614" watchObservedRunningTime="2024-01-08 21:58:23.204542041 +0000 UTC m=+1.564046814"
	Jan 08 21:58:23 scheduled-stop-306666 kubelet[1264]: I0108 21:58:23.253991    1264 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-306666" podStartSLOduration=1.253946994 podCreationTimestamp="2024-01-08 21:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:58:23.206057557 +0000 UTC m=+1.565562330" watchObservedRunningTime="2024-01-08 21:58:23.253946994 +0000 UTC m=+1.613451761"
	Jan 08 21:58:23 scheduled-stop-306666 kubelet[1264]: I0108 21:58:23.270841    1264 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-306666" podStartSLOduration=3.270778339 podCreationTimestamp="2024-01-08 21:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:58:23.254849297 +0000 UTC m=+1.614354069" watchObservedRunningTime="2024-01-08 21:58:23.270778339 +0000 UTC m=+1.630283106"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p scheduled-stop-306666 -n scheduled-stop-306666
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-306666 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-306666 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-306666 describe pod storage-provisioner: exit status 1 (68.190792ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-306666 describe pod storage-provisioner: exit status 1
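For reference, the post-mortem steps above can be reproduced by hand against the same profile; a minimal sketch, assuming the profile has not yet been deleted by the cleanup step that follows. The NotFound error above is likely because the describe call omits a namespace, while minikube's storage-provisioner pod normally lives in kube-system:
	# list pods that are not in the Running phase, cluster-wide
	kubectl --context scheduled-stop-306666 get po -A \
	  --field-selector='status.phase!=Running' \
	  -o=jsonpath='{.items[*].metadata.name}'
	# describe the offending pod, this time in its expected namespace
	kubectl --context scheduled-stop-306666 -n kube-system describe pod storage-provisioner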
helpers_test.go:175: Cleaning up "scheduled-stop-306666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-306666
--- FAIL: TestScheduledStopUnix (52.55s)
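For context, TestScheduledStopUnix exercises minikube's scheduled-stop feature. A minimal manual sketch of that flow, assuming a KVM host like the one above; the memory size and wait durations are illustrative, not the exact values the harness uses:
	# bring up the profile the test uses
	out/minikube-linux-amd64 start -p scheduled-stop-306666 --memory=2048 --driver=kvm2 --container-runtime=crio
	# schedule a stop a short time out; a later --schedule call replaces the earlier one
	out/minikube-linux-amd64 stop -p scheduled-stop-306666 --schedule 15s
	# cancel the pending stop, then confirm the node is still running
	out/minikube-linux-amd64 stop -p scheduled-stop-306666 --cancel-scheduled
	out/minikube-linux-amd64 status -p scheduled-stop-306666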

                                                
                                    
x
+
TestRunningBinaryUpgrade (161.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1003744269.exe start -p running-upgrade-882819 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0108 21:59:44.964022  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1003744269.exe start -p running-upgrade-882819 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m25.893145745s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-882819 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-882819 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (12.907135066s)

-- stdout --
	* [running-upgrade-882819] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-882819 in cluster running-upgrade-882819
	* Updating the running kvm2 "running-upgrade-882819" VM ...
	
	

-- /stdout --
** stderr ** 
	I0108 22:00:53.850087  367140 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:00:53.850462  367140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:00:53.850475  367140 out.go:309] Setting ErrFile to fd 2...
	I0108 22:00:53.850483  367140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:00:53.850744  367140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:00:53.851411  367140 out.go:303] Setting JSON to false
	I0108 22:00:53.852465  367140 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9780,"bootTime":1704741474,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:00:53.852541  367140 start.go:138] virtualization: kvm guest
	I0108 22:00:53.923555  367140 out.go:177] * [running-upgrade-882819] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:00:53.997547  367140 notify.go:220] Checking for updates...
	I0108 22:00:54.018304  367140 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:00:54.098934  367140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:00:54.170687  367140 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:00:54.189258  367140 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:00:54.190887  367140 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:00:54.192516  367140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:00:54.195329  367140 config.go:182] Loaded profile config "running-upgrade-882819": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 22:00:54.195380  367140 start_flags.go:691] config upgrade: Driver=kvm2
	I0108 22:00:54.195399  367140 start_flags.go:703] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 22:00:54.195488  367140 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/running-upgrade-882819/config.json ...
	I0108 22:00:54.196411  367140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:00:54.196519  367140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:00:54.214623  367140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40727
	I0108 22:00:54.215215  367140 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:00:54.216083  367140 main.go:141] libmachine: Using API Version  1
	I0108 22:00:54.216131  367140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:00:54.216696  367140 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:00:54.216977  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .DriverName
	I0108 22:00:54.219572  367140 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 22:00:54.221409  367140 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:00:54.221927  367140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:00:54.221992  367140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:00:54.238990  367140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0108 22:00:54.239509  367140 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:00:54.240139  367140 main.go:141] libmachine: Using API Version  1
	I0108 22:00:54.240169  367140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:00:54.240538  367140 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:00:54.240764  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .DriverName
	I0108 22:00:54.284814  367140 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 22:00:54.286436  367140 start.go:298] selected driver: kvm2
	I0108 22:00:54.286471  367140 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-882819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.117 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs:}
	I0108 22:00:54.286631  367140 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:00:54.287678  367140 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.287785  367140 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:00:54.308751  367140 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:00:54.309266  367140 cni.go:84] Creating CNI manager for ""
	I0108 22:00:54.309290  367140 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 22:00:54.309310  367140 start_flags.go:321] config:
	{Name:running-upgrade-882819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.117 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:00:54.309542  367140 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.311683  367140 out.go:177] * Starting control plane node running-upgrade-882819 in cluster running-upgrade-882819
	I0108 22:00:54.313535  367140 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0108 22:00:54.343259  367140 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 22:00:54.343452  367140 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/running-upgrade-882819/config.json ...
	I0108 22:00:54.343554  367140 cache.go:107] acquiring lock: {Name:mkb93871d157edc8cee097ed653acb14e279f5bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.343598  367140 cache.go:107] acquiring lock: {Name:mk17bb5af0dd85c51185289aeb0c9932c18cb5ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.343587  367140 cache.go:107] acquiring lock: {Name:mk2c3cbc417bd4df12ceac48445116766e9dc8e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.343721  367140 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 22:00:54.343699  367140 cache.go:107] acquiring lock: {Name:mk94aaf8b869856e4ec497a2b41d5214c461adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.343762  367140 cache.go:107] acquiring lock: {Name:mkf070243f3263bb3b781304f8c05f5b4976ce8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.343771  367140 cache.go:107] acquiring lock: {Name:mk8a55520960bf602ab8934ad630b95831919d2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.343826  367140 start.go:365] acquiring machines lock for running-upgrade-882819: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:00:54.343844  367140 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0108 22:00:54.343801  367140 cache.go:107] acquiring lock: {Name:mk66f9841a81ec774a3502a00dcdffce30d18e4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.343544  367140 cache.go:107] acquiring lock: {Name:mk1cc97b051c562d689b486514840c0f6d890850 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:00:54.343917  367140 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0108 22:00:54.343978  367140 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0108 22:00:54.343986  367140 cache.go:115] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 22:00:54.343996  367140 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0108 22:00:54.344009  367140 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 477.17µs
	I0108 22:00:54.344025  367140 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 22:00:54.343917  367140 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0108 22:00:54.344062  367140 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 22:00:54.345256  367140 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 22:00:54.345275  367140 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0108 22:00:54.345269  367140 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 22:00:54.345284  367140 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0108 22:00:54.345256  367140 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0108 22:00:54.345350  367140 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0108 22:00:54.345300  367140 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0108 22:00:54.530458  367140 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 22:00:54.553485  367140 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 22:00:54.600896  367140 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0108 22:00:54.606496  367140 cache.go:157] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0108 22:00:54.606526  367140 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 262.928448ms
	I0108 22:00:54.606542  367140 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0108 22:00:54.612711  367140 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0108 22:00:54.659749  367140 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0108 22:00:54.680293  367140 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0108 22:00:54.766152  367140 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0108 22:00:55.127031  367140 cache.go:157] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0108 22:00:55.127066  367140 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 783.299826ms
	I0108 22:00:55.127083  367140 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0108 22:00:55.427806  367140 cache.go:157] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0108 22:00:55.427851  367140 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.084132675s
	I0108 22:00:55.427872  367140 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0108 22:00:55.643493  367140 cache.go:157] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0108 22:00:55.643525  367140 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.299993253s
	I0108 22:00:55.643542  367140 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0108 22:00:55.656384  367140 cache.go:157] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0108 22:00:55.656413  367140 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.31281955s
	I0108 22:00:55.656427  367140 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0108 22:00:55.805207  367140 cache.go:157] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 22:00:55.805246  367140 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.46151229s
	I0108 22:00:55.805262  367140 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 22:00:56.104220  367140 cache.go:157] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0108 22:00:56.104264  367140 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.760685396s
	I0108 22:00:56.104282  367140 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0108 22:00:56.104306  367140 cache.go:87] Successfully saved all images to host disk.
	I0108 22:01:03.046514  367140 start.go:369] acquired machines lock for "running-upgrade-882819" in 8.702655234s
	I0108 22:01:03.046574  367140 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:01:03.046583  367140 fix.go:54] fixHost starting: minikube
	I0108 22:01:03.047049  367140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:01:03.047104  367140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:01:03.068789  367140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42065
	I0108 22:01:03.071509  367140 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:01:03.072129  367140 main.go:141] libmachine: Using API Version  1
	I0108 22:01:03.072155  367140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:01:03.072611  367140 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:01:03.072854  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .DriverName
	I0108 22:01:03.073041  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetState
	I0108 22:01:03.075128  367140 fix.go:102] recreateIfNeeded on running-upgrade-882819: state=Running err=<nil>
	W0108 22:01:03.075170  367140 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:01:03.077539  367140 out.go:177] * Updating the running kvm2 "running-upgrade-882819" VM ...
	I0108 22:01:03.079354  367140 machine.go:88] provisioning docker machine ...
	I0108 22:01:03.079422  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .DriverName
	I0108 22:01:03.079830  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetMachineName
	I0108 22:01:03.080219  367140 buildroot.go:166] provisioning hostname "running-upgrade-882819"
	I0108 22:01:03.080277  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetMachineName
	I0108 22:01:03.080534  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHHostname
	I0108 22:01:03.085107  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.085775  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:03.085813  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.086109  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHPort
	I0108 22:01:03.086345  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:03.086586  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:03.086787  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHUsername
	I0108 22:01:03.087024  367140 main.go:141] libmachine: Using SSH client type: native
	I0108 22:01:03.087648  367140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I0108 22:01:03.087672  367140 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-882819 && echo "running-upgrade-882819" | sudo tee /etc/hostname
	I0108 22:01:03.231275  367140 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-882819
	
	I0108 22:01:03.231317  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHHostname
	I0108 22:01:03.234881  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.235352  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:03.235464  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.235722  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHPort
	I0108 22:01:03.236019  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:03.236219  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:03.236479  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHUsername
	I0108 22:01:03.236757  367140 main.go:141] libmachine: Using SSH client type: native
	I0108 22:01:03.237174  367140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I0108 22:01:03.237196  367140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-882819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-882819/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-882819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:01:03.388244  367140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:01:03.388300  367140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:01:03.388327  367140 buildroot.go:174] setting up certificates
	I0108 22:01:03.388343  367140 provision.go:83] configureAuth start
	I0108 22:01:03.388358  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetMachineName
	I0108 22:01:03.388687  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetIP
	I0108 22:01:03.392168  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.392706  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:03.392751  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.393015  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHHostname
	I0108 22:01:03.395907  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.396372  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:03.396410  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.396590  367140 provision.go:138] copyHostCerts
	I0108 22:01:03.396692  367140 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:01:03.396709  367140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:01:03.396792  367140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:01:03.397029  367140 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:01:03.397051  367140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:01:03.397090  367140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:01:03.397213  367140 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:01:03.397225  367140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:01:03.397254  367140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:01:03.397354  367140 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-882819 san=[192.168.61.117 192.168.61.117 localhost 127.0.0.1 minikube running-upgrade-882819]
	I0108 22:01:03.497138  367140 provision.go:172] copyRemoteCerts
	I0108 22:01:03.497272  367140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:01:03.497326  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHHostname
	I0108 22:01:03.504752  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.505274  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:03.505332  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.505560  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHPort
	I0108 22:01:03.505810  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:03.506004  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHUsername
	I0108 22:01:03.506238  367140 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/running-upgrade-882819/id_rsa Username:docker}
	I0108 22:01:03.617231  367140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 22:01:03.652474  367140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:01:03.670996  367140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:01:03.691073  367140 provision.go:86] duration metric: configureAuth took 302.704049ms
	I0108 22:01:03.691119  367140 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:01:03.691386  367140 config.go:182] Loaded profile config "running-upgrade-882819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 22:01:03.691529  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHHostname
	I0108 22:01:03.694669  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.695136  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:03.695161  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:03.695443  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHPort
	I0108 22:01:03.695695  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:03.695870  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:03.696035  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHUsername
	I0108 22:01:03.696207  367140 main.go:141] libmachine: Using SSH client type: native
	I0108 22:01:03.696559  367140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I0108 22:01:03.696595  367140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:01:04.336833  367140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:01:04.336864  367140 machine.go:91] provisioned docker machine in 1.257466774s
	I0108 22:01:04.336875  367140 start.go:300] post-start starting for "running-upgrade-882819" (driver="kvm2")
	I0108 22:01:04.336886  367140 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:01:04.336904  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .DriverName
	I0108 22:01:04.337308  367140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:01:04.337347  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHHostname
	I0108 22:01:04.340495  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.340954  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:04.340996  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.341256  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHPort
	I0108 22:01:04.341592  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:04.341821  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHUsername
	I0108 22:01:04.342064  367140 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/running-upgrade-882819/id_rsa Username:docker}
	I0108 22:01:04.438492  367140 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:01:04.444523  367140 info.go:137] Remote host: Buildroot 2019.02.7
	I0108 22:01:04.444561  367140 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:01:04.444676  367140 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:01:04.444774  367140 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:01:04.444888  367140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:01:04.454196  367140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:01:04.485714  367140 start.go:303] post-start completed in 148.8179ms
	I0108 22:01:04.485762  367140 fix.go:56] fixHost completed within 1.439178368s
	I0108 22:01:04.485796  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHHostname
	I0108 22:01:04.489816  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.490282  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:04.490342  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.490631  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHPort
	I0108 22:01:04.490936  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:04.491179  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:04.491410  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHUsername
	I0108 22:01:04.491646  367140 main.go:141] libmachine: Using SSH client type: native
	I0108 22:01:04.492054  367140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I0108 22:01:04.492068  367140 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:01:04.616378  367140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704751264.611921993
	
	I0108 22:01:04.616408  367140 fix.go:206] guest clock: 1704751264.611921993
	I0108 22:01:04.616416  367140 fix.go:219] Guest: 2024-01-08 22:01:04.611921993 +0000 UTC Remote: 2024-01-08 22:01:04.485768798 +0000 UTC m=+10.697845646 (delta=126.153195ms)
	I0108 22:01:04.616456  367140 fix.go:190] guest clock delta is within tolerance: 126.153195ms
	I0108 22:01:04.616463  367140 start.go:83] releasing machines lock for "running-upgrade-882819", held for 1.569917428s
	I0108 22:01:04.616500  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .DriverName
	I0108 22:01:04.616860  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetIP
	I0108 22:01:04.620137  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.620690  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:04.620742  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.621046  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .DriverName
	I0108 22:01:04.621842  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .DriverName
	I0108 22:01:04.622125  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .DriverName
	I0108 22:01:04.622256  367140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:01:04.622310  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHHostname
	I0108 22:01:04.622402  367140 ssh_runner.go:195] Run: cat /version.json
	I0108 22:01:04.622445  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHHostname
	I0108 22:01:04.625788  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.626077  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.626230  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:04.626278  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.626587  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHPort
	I0108 22:01:04.626706  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:2a:5c", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:59:08 +0000 UTC Type:0 Mac:52:54:00:a5:2a:5c Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:running-upgrade-882819 Clientid:01:52:54:00:a5:2a:5c}
	I0108 22:01:04.626750  367140 main.go:141] libmachine: (running-upgrade-882819) DBG | domain running-upgrade-882819 has defined IP address 192.168.61.117 and MAC address 52:54:00:a5:2a:5c in network minikube-net
	I0108 22:01:04.626920  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:04.626937  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHPort
	I0108 22:01:04.627175  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHUsername
	I0108 22:01:04.627233  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHKeyPath
	I0108 22:01:04.627384  367140 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/running-upgrade-882819/id_rsa Username:docker}
	I0108 22:01:04.627468  367140 main.go:141] libmachine: (running-upgrade-882819) Calling .GetSSHUsername
	I0108 22:01:04.627613  367140 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/running-upgrade-882819/id_rsa Username:docker}
	W0108 22:01:04.737075  367140 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 22:01:04.737165  367140 ssh_runner.go:195] Run: systemctl --version
	I0108 22:01:04.745656  367140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:01:04.850069  367140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:01:04.860474  367140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:01:04.860579  367140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:01:04.869501  367140 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 22:01:04.869544  367140 start.go:475] detecting cgroup driver to use...
	I0108 22:01:04.869662  367140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:01:04.885831  367140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:01:04.899536  367140 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:01:04.899616  367140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:01:04.910189  367140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:01:04.922858  367140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 22:01:04.936626  367140 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 22:01:04.936707  367140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:01:05.178382  367140 docker.go:219] disabling docker service ...
	I0108 22:01:05.178456  367140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:01:06.249107  367140 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.070597548s)
	I0108 22:01:06.249200  367140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:01:06.265162  367140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:01:06.424603  367140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:01:06.604163  367140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:01:06.630921  367140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:01:06.662181  367140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 22:01:06.662274  367140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:01:06.675843  367140 out.go:177] 
	W0108 22:01:06.677519  367140 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 22:01:06.677560  367140 out.go:239] * 
	* 
	W0108 22:01:06.678594  367140 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:01:06.680550  367140 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-882819 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
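The stderr above shows where the upgrade actually breaks: the new binary rewrites pause_image with sed against /etc/crio/crio.conf.d/02-crio.conf, but no such drop-in file exists on the guest built by the v1.6.2 binary (Buildroot 2019.02.7), so sed exits 1 and start aborts with RUNTIME_ENABLE. A hedged sketch for confirming this against a guest that is still running; the ssh subcommand and -p flag are ordinary minikube usage, and the single-file /etc/crio/crio.conf location is an assumption about the old ISO, not something shown in this report:

	# check whether the drop-in directory the new binary expects exists at all
	minikube -p running-upgrade-882819 ssh 'ls /etc/crio/crio.conf.d/ || echo missing'
	# the older ISO may keep pause_image in a single crio.conf instead (assumption)
	minikube -p running-upgrade-882819 ssh 'grep -n pause_image /etc/crio/crio.conf || true'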
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 22:01:06.702996497 +0000 UTC m=+3542.404118851
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-882819 -n running-upgrade-882819
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-882819 -n running-upgrade-882819: exit status 4 (333.623696ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0108 22:01:06.983098  367385 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-882819" does not appear in /home/jenkins/minikube-integration/17866-334768/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-882819" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-882819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-882819
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-882819: (1.413232485s)
--- FAIL: TestRunningBinaryUpgrade (161.04s)
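The status post-mortem appears to exit 4 only because the profile's API endpoint cannot be resolved: "running-upgrade-882819" is missing from the kubeconfig, and the stdout warning above already names the fix. A hedged sketch of the manual check and repair, assuming the profile has not yet been deleted; update-context is the command quoted in the warning, and config get-contexts is a standard kubectl subcommand that was not run in this report:

	# see which context names the old binary actually wrote
	kubectl config get-contexts
	# rewrite this profile's kubeconfig entry from the current VM address
	minikube -p running-upgrade-882819 update-context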

x
+
TestStoppedBinaryUpgrade/Upgrade (305.81s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.416863373.exe start -p stopped-upgrade-878657 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.416863373.exe start -p stopped-upgrade-878657 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m28.629862497s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.416863373.exe -p stopped-upgrade-878657 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.416863373.exe -p stopped-upgrade-878657 stop: (1m32.805385803s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-878657 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-878657 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m4.365682319s)

-- stdout --
	* [stopped-upgrade-878657] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-878657 in cluster stopped-upgrade-878657
	* Restarting existing kvm2 VM for "stopped-upgrade-878657" ...
	
	

-- /stdout --
** stderr ** 
	I0108 22:04:22.539591  371585 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:04:22.539796  371585 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:04:22.539809  371585 out.go:309] Setting ErrFile to fd 2...
	I0108 22:04:22.539817  371585 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:04:22.540146  371585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:04:22.540915  371585 out.go:303] Setting JSON to false
	I0108 22:04:22.542313  371585 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9989,"bootTime":1704741474,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:04:22.542431  371585 start.go:138] virtualization: kvm guest
	I0108 22:04:22.545711  371585 out.go:177] * [stopped-upgrade-878657] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:04:22.548822  371585 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:04:22.548875  371585 notify.go:220] Checking for updates...
	I0108 22:04:22.551943  371585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:04:22.553554  371585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:04:22.555240  371585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:04:22.556896  371585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:04:22.558343  371585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:04:22.560249  371585 config.go:182] Loaded profile config "stopped-upgrade-878657": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 22:04:22.560277  371585 start_flags.go:691] config upgrade: Driver=kvm2
	I0108 22:04:22.560292  371585 start_flags.go:703] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 22:04:22.560397  371585 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/stopped-upgrade-878657/config.json ...
	I0108 22:04:22.561249  371585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:04:22.561345  371585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:04:22.581326  371585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0108 22:04:22.582061  371585 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:04:22.582805  371585 main.go:141] libmachine: Using API Version  1
	I0108 22:04:22.582837  371585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:04:22.583229  371585 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:04:22.583448  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	I0108 22:04:22.585719  371585 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 22:04:22.587074  371585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:04:22.587659  371585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:04:22.587734  371585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:04:22.605684  371585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0108 22:04:22.606217  371585 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:04:22.606903  371585 main.go:141] libmachine: Using API Version  1
	I0108 22:04:22.606941  371585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:04:22.607399  371585 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:04:22.607665  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	I0108 22:04:22.650787  371585 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 22:04:22.652509  371585 start.go:298] selected driver: kvm2
	I0108 22:04:22.652544  371585 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-878657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.141 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:04:22.652702  371585 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:04:22.653898  371585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.654044  371585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:04:22.675965  371585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:04:22.676563  371585 cni.go:84] Creating CNI manager for ""
	I0108 22:04:22.676592  371585 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 22:04:22.676610  371585 start_flags.go:321] config:
	{Name:stopped-upgrade-878657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.141 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:04:22.676883  371585 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.678994  371585 out.go:177] * Starting control plane node stopped-upgrade-878657 in cluster stopped-upgrade-878657
	I0108 22:04:22.680429  371585 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0108 22:04:22.717632  371585 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 22:04:22.717850  371585 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/stopped-upgrade-878657/config.json ...
	I0108 22:04:22.717935  371585 cache.go:107] acquiring lock: {Name:mk1cc97b051c562d689b486514840c0f6d890850 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.717988  371585 cache.go:107] acquiring lock: {Name:mk94aaf8b869856e4ec497a2b41d5214c461adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.718065  371585 cache.go:107] acquiring lock: {Name:mk17bb5af0dd85c51185289aeb0c9932c18cb5ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.718050  371585 cache.go:107] acquiring lock: {Name:mkf070243f3263bb3b781304f8c05f5b4976ce8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.718104  371585 cache.go:115] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0108 22:04:22.718073  371585 cache.go:115] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 22:04:22.718106  371585 cache.go:107] acquiring lock: {Name:mk66f9841a81ec774a3502a00dcdffce30d18e4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.718122  371585 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 57.772µs
	I0108 22:04:22.718135  371585 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0108 22:04:22.718117  371585 cache.go:107] acquiring lock: {Name:mkb93871d157edc8cee097ed653acb14e279f5bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.718151  371585 cache.go:115] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0108 22:04:22.718141  371585 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 215.903µs
	I0108 22:04:22.718155  371585 cache.go:115] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0108 22:04:22.718163  371585 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 22:04:22.718160  371585 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 56.3µs
	I0108 22:04:22.718174  371585 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0108 22:04:22.718085  371585 cache.go:115] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0108 22:04:22.718174  371585 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 146.386µs
	I0108 22:04:22.718188  371585 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0108 22:04:22.718186  371585 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 205.746µs
	I0108 22:04:22.717939  371585 cache.go:107] acquiring lock: {Name:mk2c3cbc417bd4df12ceac48445116766e9dc8e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.718195  371585 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0108 22:04:22.718169  371585 cache.go:107] acquiring lock: {Name:mk8a55520960bf602ab8934ad630b95831919d2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:04:22.718210  371585 cache.go:115] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0108 22:04:22.718223  371585 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 125.513µs
	I0108 22:04:22.718229  371585 cache.go:115] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 22:04:22.718235  371585 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0108 22:04:22.718237  371585 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 84.522µs
	I0108 22:04:22.718247  371585 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 22:04:22.718236  371585 cache.go:115] /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0108 22:04:22.718259  371585 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 337.518µs
	I0108 22:04:22.718273  371585 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0108 22:04:22.718281  371585 cache.go:87] Successfully saved all images to host disk.
	I0108 22:04:22.718327  371585 start.go:365] acquiring machines lock for stopped-upgrade-878657: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:04:34.752560  371585 start.go:369] acquired machines lock for "stopped-upgrade-878657" in 12.034201529s
	I0108 22:04:34.752624  371585 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:04:34.752634  371585 fix.go:54] fixHost starting: minikube
	I0108 22:04:34.753091  371585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:04:34.753145  371585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:04:34.771975  371585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41167
	I0108 22:04:34.772533  371585 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:04:34.773116  371585 main.go:141] libmachine: Using API Version  1
	I0108 22:04:34.773157  371585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:04:34.773558  371585 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:04:34.773759  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	I0108 22:04:34.773923  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetState
	I0108 22:04:34.776074  371585 fix.go:102] recreateIfNeeded on stopped-upgrade-878657: state=Stopped err=<nil>
	I0108 22:04:34.776118  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	W0108 22:04:34.776321  371585 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:04:34.779471  371585 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-878657" ...
	I0108 22:04:34.781359  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .Start
	I0108 22:04:34.781675  371585 main.go:141] libmachine: (stopped-upgrade-878657) Ensuring networks are active...
	I0108 22:04:34.782743  371585 main.go:141] libmachine: (stopped-upgrade-878657) Ensuring network default is active
	I0108 22:04:34.783177  371585 main.go:141] libmachine: (stopped-upgrade-878657) Ensuring network minikube-net is active
	I0108 22:04:34.783587  371585 main.go:141] libmachine: (stopped-upgrade-878657) Getting domain xml...
	I0108 22:04:34.784503  371585 main.go:141] libmachine: (stopped-upgrade-878657) Creating domain...
	I0108 22:04:36.263136  371585 main.go:141] libmachine: (stopped-upgrade-878657) Waiting to get IP...
	I0108 22:04:36.264741  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:36.265451  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:36.265572  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:36.265409  371750 retry.go:31] will retry after 216.783324ms: waiting for machine to come up
	I0108 22:04:36.484169  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:36.484883  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:36.484912  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:36.484826  371750 retry.go:31] will retry after 328.632352ms: waiting for machine to come up
	I0108 22:04:36.815931  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:36.816782  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:36.816818  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:36.816714  371750 retry.go:31] will retry after 300.73773ms: waiting for machine to come up
	I0108 22:04:37.119426  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:37.120159  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:37.120204  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:37.120103  371750 retry.go:31] will retry after 402.442988ms: waiting for machine to come up
	I0108 22:04:37.524825  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:37.525578  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:37.525617  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:37.525500  371750 retry.go:31] will retry after 644.253389ms: waiting for machine to come up
	I0108 22:04:38.171538  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:38.172192  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:38.172225  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:38.172123  371750 retry.go:31] will retry after 897.23282ms: waiting for machine to come up
	I0108 22:04:39.070777  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:39.071516  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:39.071561  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:39.071420  371750 retry.go:31] will retry after 980.332086ms: waiting for machine to come up
	I0108 22:04:40.053135  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:40.053809  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:40.053846  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:40.053730  371750 retry.go:31] will retry after 1.331254962s: waiting for machine to come up
	I0108 22:04:41.386683  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:41.387309  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:41.387345  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:41.387245  371750 retry.go:31] will retry after 1.462143728s: waiting for machine to come up
	I0108 22:04:42.852392  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:42.852984  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:42.853025  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:42.852922  371750 retry.go:31] will retry after 2.098498988s: waiting for machine to come up
	I0108 22:04:44.953302  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:44.953887  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:44.953921  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:44.953831  371750 retry.go:31] will retry after 2.677879227s: waiting for machine to come up
	I0108 22:04:47.634884  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:47.635691  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:47.635751  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:47.635610  371750 retry.go:31] will retry after 3.423432868s: waiting for machine to come up
	I0108 22:04:51.061637  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:51.062185  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:51.062237  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:51.062127  371750 retry.go:31] will retry after 4.109841825s: waiting for machine to come up
	I0108 22:04:55.173934  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:55.174509  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:55.174546  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:55.174449  371750 retry.go:31] will retry after 4.108938277s: waiting for machine to come up
	I0108 22:04:59.287956  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:04:59.288611  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:04:59.288645  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:04:59.288558  371750 retry.go:31] will retry after 6.061036745s: waiting for machine to come up
	I0108 22:05:05.351304  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:05.351735  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:05:05.351756  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:05:05.351698  371750 retry.go:31] will retry after 5.459229191s: waiting for machine to come up
	I0108 22:05:10.812907  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:10.813558  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | unable to find current IP address of domain stopped-upgrade-878657 in network minikube-net
	I0108 22:05:10.813605  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | I0108 22:05:10.813489  371750 retry.go:31] will retry after 8.160434877s: waiting for machine to come up
	I0108 22:05:18.977744  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:18.978267  371585 main.go:141] libmachine: (stopped-upgrade-878657) Found IP for machine: 192.168.61.141
	I0108 22:05:18.978292  371585 main.go:141] libmachine: (stopped-upgrade-878657) Reserving static IP address...
	I0108 22:05:18.978305  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has current primary IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:18.978730  371585 main.go:141] libmachine: (stopped-upgrade-878657) Reserved static IP address: 192.168.61.141
	I0108 22:05:18.978760  371585 main.go:141] libmachine: (stopped-upgrade-878657) Waiting for SSH to be available...
	I0108 22:05:18.978782  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "stopped-upgrade-878657", mac: "52:54:00:5a:d5:89", ip: "192.168.61.141"} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:18.978817  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-878657", mac: "52:54:00:5a:d5:89", ip: "192.168.61.141"}
	I0108 22:05:18.978831  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | Getting to WaitForSSH function...
	I0108 22:05:18.981432  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:18.981823  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:18.981855  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:18.981995  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | Using SSH client type: external
	I0108 22:05:18.982032  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/stopped-upgrade-878657/id_rsa (-rw-------)
	I0108 22:05:18.982080  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/stopped-upgrade-878657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:05:18.982093  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | About to run SSH command:
	I0108 22:05:18.982104  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | exit 0
	I0108 22:05:19.115577  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | SSH cmd err, output: <nil>: 
	I0108 22:05:19.116022  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetConfigRaw
	I0108 22:05:19.116727  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetIP
	I0108 22:05:19.119820  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.120235  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:19.120282  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.120523  371585 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/stopped-upgrade-878657/config.json ...
	I0108 22:05:19.120743  371585 machine.go:88] provisioning docker machine ...
	I0108 22:05:19.120764  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	I0108 22:05:19.120964  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetMachineName
	I0108 22:05:19.121111  371585 buildroot.go:166] provisioning hostname "stopped-upgrade-878657"
	I0108 22:05:19.121121  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetMachineName
	I0108 22:05:19.121298  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHHostname
	I0108 22:05:19.123982  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.124470  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:19.124531  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.124719  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHPort
	I0108 22:05:19.124994  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:19.125214  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:19.125397  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHUsername
	I0108 22:05:19.125577  371585 main.go:141] libmachine: Using SSH client type: native
	I0108 22:05:19.125943  371585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I0108 22:05:19.125959  371585 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-878657 && echo "stopped-upgrade-878657" | sudo tee /etc/hostname
	I0108 22:05:19.260464  371585 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-878657
	
	I0108 22:05:19.260502  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHHostname
	I0108 22:05:19.264172  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.264652  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:19.264689  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.264874  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHPort
	I0108 22:05:19.265168  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:19.265381  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:19.265492  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHUsername
	I0108 22:05:19.265696  371585 main.go:141] libmachine: Using SSH client type: native
	I0108 22:05:19.266086  371585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I0108 22:05:19.266113  371585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-878657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-878657/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-878657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:05:19.401455  371585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:05:19.401489  371585 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:05:19.401511  371585 buildroot.go:174] setting up certificates
	I0108 22:05:19.401520  371585 provision.go:83] configureAuth start
	I0108 22:05:19.401531  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetMachineName
	I0108 22:05:19.401892  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetIP
	I0108 22:05:19.405581  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.406155  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:19.406194  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.406364  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHHostname
	I0108 22:05:19.409125  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.409531  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:19.409561  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.409745  371585 provision.go:138] copyHostCerts
	I0108 22:05:19.409820  371585 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:05:19.409830  371585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:05:19.409903  371585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:05:19.410013  371585 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:05:19.410022  371585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:05:19.410064  371585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:05:19.410137  371585 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:05:19.410145  371585 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:05:19.410171  371585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:05:19.410236  371585 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-878657 san=[192.168.61.141 192.168.61.141 localhost 127.0.0.1 minikube stopped-upgrade-878657]
	I0108 22:05:19.521400  371585 provision.go:172] copyRemoteCerts
	I0108 22:05:19.521470  371585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:05:19.521505  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHHostname
	I0108 22:05:19.524743  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.525215  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:19.525254  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.525461  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHPort
	I0108 22:05:19.525698  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:19.525909  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHUsername
	I0108 22:05:19.526120  371585 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/stopped-upgrade-878657/id_rsa Username:docker}
	I0108 22:05:19.618878  371585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:05:19.633958  371585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 22:05:19.650899  371585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:05:19.667662  371585 provision.go:86] duration metric: configureAuth took 266.1248ms
	I0108 22:05:19.667701  371585 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:05:19.667915  371585 config.go:182] Loaded profile config "stopped-upgrade-878657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 22:05:19.668035  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHHostname
	I0108 22:05:19.670878  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.671345  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:19.671392  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:19.671627  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHPort
	I0108 22:05:19.671882  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:19.672103  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:19.672292  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHUsername
	I0108 22:05:19.672539  371585 main.go:141] libmachine: Using SSH client type: native
	I0108 22:05:19.672889  371585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I0108 22:05:19.672946  371585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:05:25.834253  371585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:05:25.834295  371585 machine.go:91] provisioned docker machine in 6.713536007s
	I0108 22:05:25.834310  371585 start.go:300] post-start starting for "stopped-upgrade-878657" (driver="kvm2")
	I0108 22:05:25.834326  371585 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:05:25.834350  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	I0108 22:05:25.834758  371585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:05:25.834793  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHHostname
	I0108 22:05:25.837829  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:25.838343  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:25.838380  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:25.838663  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHPort
	I0108 22:05:25.838925  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:25.839122  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHUsername
	I0108 22:05:25.839287  371585 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/stopped-upgrade-878657/id_rsa Username:docker}
	I0108 22:05:25.931683  371585 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:05:25.936829  371585 info.go:137] Remote host: Buildroot 2019.02.7
	I0108 22:05:25.936881  371585 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:05:25.936990  371585 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:05:25.937103  371585 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:05:25.937220  371585 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:05:25.944401  371585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:05:25.960053  371585 start.go:303] post-start completed in 125.718879ms
	I0108 22:05:25.960092  371585 fix.go:56] fixHost completed within 51.207459713s
	I0108 22:05:25.960122  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHHostname
	I0108 22:05:25.963208  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:25.963672  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:25.963711  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:25.963923  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHPort
	I0108 22:05:25.964176  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:25.964331  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:25.964521  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHUsername
	I0108 22:05:25.964732  371585 main.go:141] libmachine: Using SSH client type: native
	I0108 22:05:25.965218  371585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I0108 22:05:25.965234  371585 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:05:26.094689  371585 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704751526.020232250
	
	I0108 22:05:26.094735  371585 fix.go:206] guest clock: 1704751526.020232250
	I0108 22:05:26.094750  371585 fix.go:219] Guest: 2024-01-08 22:05:26.02023225 +0000 UTC Remote: 2024-01-08 22:05:25.960096746 +0000 UTC m=+63.494800939 (delta=60.135504ms)
	I0108 22:05:26.094781  371585 fix.go:190] guest clock delta is within tolerance: 60.135504ms
	I0108 22:05:26.094788  371585 start.go:83] releasing machines lock for "stopped-upgrade-878657", held for 51.342189795s
	I0108 22:05:26.094836  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	I0108 22:05:26.095310  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetIP
	I0108 22:05:26.099143  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:26.099599  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:26.099644  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:26.099948  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	I0108 22:05:26.100755  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	I0108 22:05:26.101016  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .DriverName
	I0108 22:05:26.101137  371585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:05:26.101196  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHHostname
	I0108 22:05:26.101301  371585 ssh_runner.go:195] Run: cat /version.json
	I0108 22:05:26.101333  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHHostname
	I0108 22:05:26.104429  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:26.104582  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:26.104849  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:26.104875  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:26.104904  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:d5:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 23:05:11 +0000 UTC Type:0 Mac:52:54:00:5a:d5:89 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:stopped-upgrade-878657 Clientid:01:52:54:00:5a:d5:89}
	I0108 22:05:26.104927  371585 main.go:141] libmachine: (stopped-upgrade-878657) DBG | domain stopped-upgrade-878657 has defined IP address 192.168.61.141 and MAC address 52:54:00:5a:d5:89 in network minikube-net
	I0108 22:05:26.105120  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHPort
	I0108 22:05:26.105457  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:26.105490  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHPort
	I0108 22:05:26.105643  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHUsername
	I0108 22:05:26.105693  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHKeyPath
	I0108 22:05:26.105812  371585 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/stopped-upgrade-878657/id_rsa Username:docker}
	I0108 22:05:26.106023  371585 main.go:141] libmachine: (stopped-upgrade-878657) Calling .GetSSHUsername
	I0108 22:05:26.106199  371585 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/stopped-upgrade-878657/id_rsa Username:docker}
	W0108 22:05:26.223326  371585 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 22:05:26.223466  371585 ssh_runner.go:195] Run: systemctl --version
	I0108 22:05:26.229027  371585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:05:26.381581  371585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:05:26.389000  371585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:05:26.389073  371585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:05:26.395573  371585 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 22:05:26.395606  371585 start.go:475] detecting cgroup driver to use...
	I0108 22:05:26.395719  371585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:05:26.407667  371585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:05:26.418365  371585 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:05:26.418460  371585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:05:26.427366  371585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:05:26.436181  371585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 22:05:26.447483  371585 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 22:05:26.447644  371585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:05:26.552208  371585 docker.go:219] disabling docker service ...
	I0108 22:05:26.552346  371585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:05:26.567878  371585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:05:26.578601  371585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:05:26.682353  371585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:05:26.771683  371585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:05:26.784627  371585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:05:26.801425  371585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 22:05:26.801520  371585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:05:26.811130  371585 out.go:177] 
	W0108 22:05:26.813050  371585 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 22:05:26.813069  371585 out.go:239] * 
	* 
	W0108 22:05:26.814460  371585 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:05:26.815912  371585 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-878657 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (305.81s)
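The RUNTIME_ENABLE failure above comes from the pause_image rewrite step: the guest provisioned from the v1.6.2 ISO evidently has no /etc/crio/crio.conf.d/02-crio.conf drop-in, so the sed command exits with status 1 and start aborts. As a minimal sketch (not what minikube itself does), one could guard the rewrite by first checking which cri-o config file the guest actually ships; the fallback path below is an assumption for illustration only:

	# hedged sketch: pick whichever cri-o config exists before rewriting pause_image
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf   # assumed fallback on older guest images
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"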

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (139.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-079759 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-079759 --alsologtostderr -v=3: exit status 82 (2m1.319883518s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-079759"  ...
	* Stopping node "old-k8s-version-079759"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 22:07:57.433493  374039 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:07:57.433680  374039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:07:57.433689  374039 out.go:309] Setting ErrFile to fd 2...
	I0108 22:07:57.433694  374039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:07:57.434009  374039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:07:57.434377  374039 out.go:303] Setting JSON to false
	I0108 22:07:57.434509  374039 mustload.go:65] Loading cluster: old-k8s-version-079759
	I0108 22:07:57.435109  374039 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:07:57.435222  374039 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/config.json ...
	I0108 22:07:57.435470  374039 mustload.go:65] Loading cluster: old-k8s-version-079759
	I0108 22:07:57.435653  374039 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:07:57.435700  374039 stop.go:39] StopHost: old-k8s-version-079759
	I0108 22:07:57.436144  374039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:07:57.436221  374039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:07:57.455260  374039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0108 22:07:57.456065  374039 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:07:57.456796  374039 main.go:141] libmachine: Using API Version  1
	I0108 22:07:57.456830  374039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:07:57.457356  374039 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:07:57.460743  374039 out.go:177] * Stopping node "old-k8s-version-079759"  ...
	I0108 22:07:57.462510  374039 main.go:141] libmachine: Stopping "old-k8s-version-079759"...
	I0108 22:07:57.462553  374039 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:07:57.464705  374039 main.go:141] libmachine: (old-k8s-version-079759) Calling .Stop
	I0108 22:07:57.468525  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 0/60
	I0108 22:07:58.470182  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 1/60
	I0108 22:07:59.472930  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 2/60
	I0108 22:08:00.475200  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 3/60
	I0108 22:08:01.477524  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 4/60
	I0108 22:08:02.480101  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 5/60
	I0108 22:08:03.482844  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 6/60
	I0108 22:08:04.484281  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 7/60
	I0108 22:08:05.486474  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 8/60
	I0108 22:08:06.488565  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 9/60
	I0108 22:08:07.490747  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 10/60
	I0108 22:08:08.492532  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 11/60
	I0108 22:08:09.494332  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 12/60
	I0108 22:08:10.496155  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 13/60
	I0108 22:08:11.498568  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 14/60
	I0108 22:08:12.500701  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 15/60
	I0108 22:08:13.502441  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 16/60
	I0108 22:08:14.504269  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 17/60
	I0108 22:08:15.505720  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 18/60
	I0108 22:08:16.508209  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 19/60
	I0108 22:08:17.509640  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 20/60
	I0108 22:08:18.511489  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 21/60
	I0108 22:08:19.513177  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 22/60
	I0108 22:08:20.514516  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 23/60
	I0108 22:08:21.516510  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 24/60
	I0108 22:08:22.519451  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 25/60
	I0108 22:08:23.521241  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 26/60
	I0108 22:08:24.522824  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 27/60
	I0108 22:08:25.524513  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 28/60
	I0108 22:08:26.526042  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 29/60
	I0108 22:08:27.528224  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 30/60
	I0108 22:08:28.530450  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 31/60
	I0108 22:08:29.532449  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 32/60
	I0108 22:08:30.534667  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 33/60
	I0108 22:08:31.536939  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 34/60
	I0108 22:08:32.538626  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 35/60
	I0108 22:08:33.540458  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 36/60
	I0108 22:08:34.541983  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 37/60
	I0108 22:08:35.544383  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 38/60
	I0108 22:08:36.546456  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 39/60
	I0108 22:08:37.548355  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 40/60
	I0108 22:08:38.549961  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 41/60
	I0108 22:08:39.551519  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 42/60
	I0108 22:08:40.552977  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 43/60
	I0108 22:08:41.555052  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 44/60
	I0108 22:08:42.557356  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 45/60
	I0108 22:08:43.559088  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 46/60
	I0108 22:08:44.560890  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 47/60
	I0108 22:08:45.562248  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 48/60
	I0108 22:08:46.563488  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 49/60
	I0108 22:08:47.564994  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 50/60
	I0108 22:08:48.566660  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 51/60
	I0108 22:08:49.568487  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 52/60
	I0108 22:08:50.570508  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 53/60
	I0108 22:08:51.572678  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 54/60
	I0108 22:08:52.574757  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 55/60
	I0108 22:08:53.576732  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 56/60
	I0108 22:08:54.578485  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 57/60
	I0108 22:08:55.579991  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 58/60
	I0108 22:08:56.581556  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 59/60
	I0108 22:08:57.582553  374039 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:08:57.582609  374039 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:08:57.582631  374039 retry.go:31] will retry after 935.741814ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:08:58.518702  374039 stop.go:39] StopHost: old-k8s-version-079759
	I0108 22:08:58.519160  374039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:08:58.519318  374039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:08:58.536080  374039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0108 22:08:58.536683  374039 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:08:58.537197  374039 main.go:141] libmachine: Using API Version  1
	I0108 22:08:58.537215  374039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:08:58.537599  374039 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:08:58.539786  374039 out.go:177] * Stopping node "old-k8s-version-079759"  ...
	I0108 22:08:58.541764  374039 main.go:141] libmachine: Stopping "old-k8s-version-079759"...
	I0108 22:08:58.541782  374039 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:08:58.543827  374039 main.go:141] libmachine: (old-k8s-version-079759) Calling .Stop
	I0108 22:08:58.547665  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 0/60
	I0108 22:08:59.549726  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 1/60
	I0108 22:09:00.551307  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 2/60
	I0108 22:09:01.552925  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 3/60
	I0108 22:09:02.554918  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 4/60
	I0108 22:09:03.557382  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 5/60
	I0108 22:09:04.559496  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 6/60
	I0108 22:09:05.560890  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 7/60
	I0108 22:09:06.562748  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 8/60
	I0108 22:09:07.564791  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 9/60
	I0108 22:09:08.566526  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 10/60
	I0108 22:09:09.569115  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 11/60
	I0108 22:09:10.570861  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 12/60
	I0108 22:09:11.572261  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 13/60
	I0108 22:09:12.573492  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 14/60
	I0108 22:09:13.575441  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 15/60
	I0108 22:09:14.576827  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 16/60
	I0108 22:09:15.578473  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 17/60
	I0108 22:09:16.580305  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 18/60
	I0108 22:09:17.581816  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 19/60
	I0108 22:09:18.583798  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 20/60
	I0108 22:09:19.585518  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 21/60
	I0108 22:09:20.587109  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 22/60
	I0108 22:09:21.588702  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 23/60
	I0108 22:09:22.590150  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 24/60
	I0108 22:09:23.592135  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 25/60
	I0108 22:09:24.593937  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 26/60
	I0108 22:09:25.595479  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 27/60
	I0108 22:09:26.597197  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 28/60
	I0108 22:09:27.598921  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 29/60
	I0108 22:09:28.601445  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 30/60
	I0108 22:09:29.603097  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 31/60
	I0108 22:09:30.605216  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 32/60
	I0108 22:09:31.607034  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 33/60
	I0108 22:09:32.608770  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 34/60
	I0108 22:09:33.610602  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 35/60
	I0108 22:09:34.612274  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 36/60
	I0108 22:09:35.613836  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 37/60
	I0108 22:09:36.615601  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 38/60
	I0108 22:09:37.616905  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 39/60
	I0108 22:09:38.619582  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 40/60
	I0108 22:09:39.621134  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 41/60
	I0108 22:09:40.623070  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 42/60
	I0108 22:09:41.624734  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 43/60
	I0108 22:09:42.626368  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 44/60
	I0108 22:09:43.628557  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 45/60
	I0108 22:09:44.630770  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 46/60
	I0108 22:09:45.632356  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 47/60
	I0108 22:09:46.633988  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 48/60
	I0108 22:09:47.635477  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 49/60
	I0108 22:09:48.637444  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 50/60
	I0108 22:09:49.639017  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 51/60
	I0108 22:09:50.640427  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 52/60
	I0108 22:09:51.642201  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 53/60
	I0108 22:09:52.644015  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 54/60
	I0108 22:09:53.646176  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 55/60
	I0108 22:09:54.647961  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 56/60
	I0108 22:09:55.649794  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 57/60
	I0108 22:09:56.651597  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 58/60
	I0108 22:09:57.653866  374039 main.go:141] libmachine: (old-k8s-version-079759) Waiting for machine to stop 59/60
	I0108 22:09:58.655096  374039 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:09:58.655189  374039 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:09:58.658702  374039 out.go:177] 
	W0108 22:09:58.660641  374039 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 22:09:58.660682  374039 out.go:239] * 
	* 
	W0108 22:09:58.664409  374039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:09:58.666066  374039 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-079759 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079759 -n old-k8s-version-079759
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079759 -n old-k8s-version-079759: exit status 3 (18.648277373s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:10:17.315938  374688 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0108 22:10:17.315977  374688 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-079759" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.97s)
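The stop failure is a GUEST_STOP_TIMEOUT: minikube's stop path polls the machine roughly once a second for 60 seconds, retries the whole stop once, and then exits with status 82 while the VM is still reported as "Running"; the follow-up status probe can no longer reach the guest over SSH ("no route to host"), so the post-mortem log retrieval is skipped. For manual triage on the Jenkins host one might query libvirt directly; a hedged sketch (assuming the default qemu:///system URI and that the libvirt domain is named after the profile) is:

	virsh -c qemu:///system list --all                       # is old-k8s-version-079759 still listed as running?
	virsh -c qemu:///system dominfo old-k8s-version-079759   # current state of the domain
	virsh -c qemu:///system destroy old-k8s-version-079759   # forced power-off if a graceful shutdown never completes

The same GUEST_STOP_TIMEOUT pattern recurs in the no-preload and embed-certs stop tests that follow.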

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (140.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-675668 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-675668 --alsologtostderr -v=3: exit status 82 (2m1.606710356s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-675668"  ...
	* Stopping node "no-preload-675668"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 22:08:32.261481  374279 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:08:32.261930  374279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:08:32.261947  374279 out.go:309] Setting ErrFile to fd 2...
	I0108 22:08:32.261955  374279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:08:32.262456  374279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:08:32.262877  374279 out.go:303] Setting JSON to false
	I0108 22:08:32.263000  374279 mustload.go:65] Loading cluster: no-preload-675668
	I0108 22:08:32.263546  374279 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:08:32.263670  374279 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/config.json ...
	I0108 22:08:32.263898  374279 mustload.go:65] Loading cluster: no-preload-675668
	I0108 22:08:32.264035  374279 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:08:32.264091  374279 stop.go:39] StopHost: no-preload-675668
	I0108 22:08:32.264673  374279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:08:32.264735  374279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:08:32.284771  374279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0108 22:08:32.286155  374279 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:08:32.287040  374279 main.go:141] libmachine: Using API Version  1
	I0108 22:08:32.287081  374279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:08:32.287698  374279 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:08:32.290436  374279 out.go:177] * Stopping node "no-preload-675668"  ...
	I0108 22:08:32.292274  374279 main.go:141] libmachine: Stopping "no-preload-675668"...
	I0108 22:08:32.292312  374279 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:08:32.294815  374279 main.go:141] libmachine: (no-preload-675668) Calling .Stop
	I0108 22:08:32.298776  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 0/60
	I0108 22:08:33.301045  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 1/60
	I0108 22:08:34.303339  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 2/60
	I0108 22:08:35.305468  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 3/60
	I0108 22:08:36.307638  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 4/60
	I0108 22:08:37.310150  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 5/60
	I0108 22:08:38.311840  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 6/60
	I0108 22:08:39.313877  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 7/60
	I0108 22:08:40.315884  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 8/60
	I0108 22:08:41.318328  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 9/60
	I0108 22:08:42.319481  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 10/60
	I0108 22:08:43.321072  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 11/60
	I0108 22:08:44.323004  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 12/60
	I0108 22:08:45.324869  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 13/60
	I0108 22:08:46.326104  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 14/60
	I0108 22:08:47.328174  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 15/60
	I0108 22:08:48.329694  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 16/60
	I0108 22:08:49.331619  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 17/60
	I0108 22:08:50.333313  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 18/60
	I0108 22:08:51.334870  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 19/60
	I0108 22:08:52.336712  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 20/60
	I0108 22:08:53.338321  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 21/60
	I0108 22:08:54.339786  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 22/60
	I0108 22:08:55.341385  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 23/60
	I0108 22:08:56.342884  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 24/60
	I0108 22:08:57.344477  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 25/60
	I0108 22:08:58.345876  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 26/60
	I0108 22:08:59.347518  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 27/60
	I0108 22:09:00.348937  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 28/60
	I0108 22:09:01.350217  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 29/60
	I0108 22:09:02.352465  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 30/60
	I0108 22:09:03.353936  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 31/60
	I0108 22:09:04.355925  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 32/60
	I0108 22:09:05.357442  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 33/60
	I0108 22:09:06.359525  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 34/60
	I0108 22:09:07.361814  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 35/60
	I0108 22:09:08.363172  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 36/60
	I0108 22:09:09.365098  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 37/60
	I0108 22:09:10.366730  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 38/60
	I0108 22:09:11.368499  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 39/60
	I0108 22:09:12.370676  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 40/60
	I0108 22:09:13.372167  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 41/60
	I0108 22:09:14.374186  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 42/60
	I0108 22:09:15.375902  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 43/60
	I0108 22:09:16.377801  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 44/60
	I0108 22:09:17.379974  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 45/60
	I0108 22:09:18.381466  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 46/60
	I0108 22:09:19.382998  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 47/60
	I0108 22:09:20.384321  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 48/60
	I0108 22:09:21.386133  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 49/60
	I0108 22:09:22.387815  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 50/60
	I0108 22:09:23.389132  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 51/60
	I0108 22:09:24.390713  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 52/60
	I0108 22:09:25.392187  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 53/60
	I0108 22:09:26.393807  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 54/60
	I0108 22:09:27.396346  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 55/60
	I0108 22:09:28.397907  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 56/60
	I0108 22:09:29.399769  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 57/60
	I0108 22:09:30.401548  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 58/60
	I0108 22:09:31.403160  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 59/60
	I0108 22:09:32.403875  374279 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:09:32.403960  374279 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:09:32.403996  374279 retry.go:31] will retry after 1.226150517s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:09:33.631445  374279 stop.go:39] StopHost: no-preload-675668
	I0108 22:09:33.631982  374279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:09:33.632049  374279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:09:33.649301  374279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40187
	I0108 22:09:33.649885  374279 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:09:33.650439  374279 main.go:141] libmachine: Using API Version  1
	I0108 22:09:33.650470  374279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:09:33.650818  374279 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:09:33.653056  374279 out.go:177] * Stopping node "no-preload-675668"  ...
	I0108 22:09:33.654590  374279 main.go:141] libmachine: Stopping "no-preload-675668"...
	I0108 22:09:33.654617  374279 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:09:33.656825  374279 main.go:141] libmachine: (no-preload-675668) Calling .Stop
	I0108 22:09:33.660739  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 0/60
	I0108 22:09:34.662386  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 1/60
	I0108 22:09:35.664069  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 2/60
	I0108 22:09:36.665671  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 3/60
	I0108 22:09:37.667610  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 4/60
	I0108 22:09:38.669598  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 5/60
	I0108 22:09:39.671398  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 6/60
	I0108 22:09:40.673117  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 7/60
	I0108 22:09:41.674715  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 8/60
	I0108 22:09:42.676413  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 9/60
	I0108 22:09:43.677882  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 10/60
	I0108 22:09:44.679721  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 11/60
	I0108 22:09:45.681268  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 12/60
	I0108 22:09:46.682781  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 13/60
	I0108 22:09:47.684394  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 14/60
	I0108 22:09:48.686585  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 15/60
	I0108 22:09:49.688477  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 16/60
	I0108 22:09:50.689979  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 17/60
	I0108 22:09:51.692045  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 18/60
	I0108 22:09:52.693810  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 19/60
	I0108 22:09:53.696299  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 20/60
	I0108 22:09:54.697690  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 21/60
	I0108 22:09:55.699510  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 22/60
	I0108 22:09:56.700761  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 23/60
	I0108 22:09:57.702209  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 24/60
	I0108 22:09:58.704884  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 25/60
	I0108 22:09:59.706442  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 26/60
	I0108 22:10:00.708513  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 27/60
	I0108 22:10:01.710273  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 28/60
	I0108 22:10:02.712048  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 29/60
	I0108 22:10:03.714027  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 30/60
	I0108 22:10:04.715642  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 31/60
	I0108 22:10:05.717475  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 32/60
	I0108 22:10:06.718959  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 33/60
	I0108 22:10:07.720810  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 34/60
	I0108 22:10:08.723026  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 35/60
	I0108 22:10:09.724490  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 36/60
	I0108 22:10:10.726079  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 37/60
	I0108 22:10:11.727630  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 38/60
	I0108 22:10:12.729268  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 39/60
	I0108 22:10:13.731803  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 40/60
	I0108 22:10:14.733392  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 41/60
	I0108 22:10:15.735072  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 42/60
	I0108 22:10:16.736905  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 43/60
	I0108 22:10:17.738462  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 44/60
	I0108 22:10:18.740744  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 45/60
	I0108 22:10:19.742606  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 46/60
	I0108 22:10:20.744365  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 47/60
	I0108 22:10:21.746127  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 48/60
	I0108 22:10:22.747964  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 49/60
	I0108 22:10:23.750039  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 50/60
	I0108 22:10:24.751780  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 51/60
	I0108 22:10:25.753151  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 52/60
	I0108 22:10:26.754929  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 53/60
	I0108 22:10:27.756588  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 54/60
	I0108 22:10:28.758955  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 55/60
	I0108 22:10:29.760751  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 56/60
	I0108 22:10:30.762378  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 57/60
	I0108 22:10:31.764286  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 58/60
	I0108 22:10:32.766016  374279 main.go:141] libmachine: (no-preload-675668) Waiting for machine to stop 59/60
	I0108 22:10:33.767235  374279 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:10:33.767293  374279 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:10:33.769726  374279 out.go:177] 
	W0108 22:10:33.771703  374279 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 22:10:33.771724  374279 out.go:239] * 
	* 
	W0108 22:10:33.774679  374279 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:10:33.776463  374279 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-675668 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675668 -n no-preload-675668
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675668 -n no-preload-675668: exit status 3 (18.608956638s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:10:52.387740  374931 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.153:22: connect: no route to host
	E0108 22:10:52.387783  374931 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.153:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-675668" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-903819 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-903819 --alsologtostderr -v=3: exit status 82 (2m0.929805984s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-903819"  ...
	* Stopping node "embed-certs-903819"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 22:08:40.776945  374368 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:08:40.777277  374368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:08:40.777288  374368 out.go:309] Setting ErrFile to fd 2...
	I0108 22:08:40.777296  374368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:08:40.777553  374368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:08:40.777878  374368 out.go:303] Setting JSON to false
	I0108 22:08:40.778001  374368 mustload.go:65] Loading cluster: embed-certs-903819
	I0108 22:08:40.778461  374368 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:08:40.778573  374368 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/config.json ...
	I0108 22:08:40.778790  374368 mustload.go:65] Loading cluster: embed-certs-903819
	I0108 22:08:40.778950  374368 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:08:40.778991  374368 stop.go:39] StopHost: embed-certs-903819
	I0108 22:08:40.779582  374368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:08:40.779648  374368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:08:40.797011  374368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I0108 22:08:40.797540  374368 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:08:40.798332  374368 main.go:141] libmachine: Using API Version  1
	I0108 22:08:40.798359  374368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:08:40.798846  374368 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:08:40.800723  374368 out.go:177] * Stopping node "embed-certs-903819"  ...
	I0108 22:08:40.802634  374368 main.go:141] libmachine: Stopping "embed-certs-903819"...
	I0108 22:08:40.802667  374368 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:08:40.805052  374368 main.go:141] libmachine: (embed-certs-903819) Calling .Stop
	I0108 22:08:40.809454  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 0/60
	I0108 22:08:41.811253  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 1/60
	I0108 22:08:42.813041  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 2/60
	I0108 22:08:43.814507  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 3/60
	I0108 22:08:44.816324  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 4/60
	I0108 22:08:45.818676  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 5/60
	I0108 22:08:46.820276  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 6/60
	I0108 22:08:47.822154  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 7/60
	I0108 22:08:48.823665  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 8/60
	I0108 22:08:49.825305  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 9/60
	I0108 22:08:50.827517  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 10/60
	I0108 22:08:51.829364  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 11/60
	I0108 22:08:52.830937  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 12/60
	I0108 22:08:53.832576  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 13/60
	I0108 22:08:54.834321  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 14/60
	I0108 22:08:55.836704  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 15/60
	I0108 22:08:56.838105  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 16/60
	I0108 22:08:57.840248  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 17/60
	I0108 22:08:58.842025  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 18/60
	I0108 22:08:59.844166  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 19/60
	I0108 22:09:00.846212  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 20/60
	I0108 22:09:01.848039  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 21/60
	I0108 22:09:02.849400  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 22/60
	I0108 22:09:03.851141  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 23/60
	I0108 22:09:04.852765  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 24/60
	I0108 22:09:05.855371  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 25/60
	I0108 22:09:06.856823  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 26/60
	I0108 22:09:07.858333  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 27/60
	I0108 22:09:08.860129  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 28/60
	I0108 22:09:09.861626  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 29/60
	I0108 22:09:10.864026  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 30/60
	I0108 22:09:11.866721  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 31/60
	I0108 22:09:12.868656  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 32/60
	I0108 22:09:13.870234  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 33/60
	I0108 22:09:14.871620  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 34/60
	I0108 22:09:15.874061  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 35/60
	I0108 22:09:16.875530  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 36/60
	I0108 22:09:17.876913  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 37/60
	I0108 22:09:18.878412  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 38/60
	I0108 22:09:19.880086  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 39/60
	I0108 22:09:20.881626  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 40/60
	I0108 22:09:21.883452  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 41/60
	I0108 22:09:22.884929  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 42/60
	I0108 22:09:23.886194  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 43/60
	I0108 22:09:24.887631  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 44/60
	I0108 22:09:25.890348  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 45/60
	I0108 22:09:26.892054  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 46/60
	I0108 22:09:27.894093  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 47/60
	I0108 22:09:28.895820  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 48/60
	I0108 22:09:29.897566  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 49/60
	I0108 22:09:30.899173  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 50/60
	I0108 22:09:31.900994  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 51/60
	I0108 22:09:32.902398  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 52/60
	I0108 22:09:33.904408  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 53/60
	I0108 22:09:34.905813  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 54/60
	I0108 22:09:35.908177  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 55/60
	I0108 22:09:36.910080  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 56/60
	I0108 22:09:37.911333  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 57/60
	I0108 22:09:38.912895  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 58/60
	I0108 22:09:39.914560  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 59/60
	I0108 22:09:40.915292  374368 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:09:40.915400  374368 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:09:40.915432  374368 retry.go:31] will retry after 573.806828ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:09:41.490292  374368 stop.go:39] StopHost: embed-certs-903819
	I0108 22:09:41.490740  374368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:09:41.490791  374368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:09:41.506747  374368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0108 22:09:41.507279  374368 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:09:41.507793  374368 main.go:141] libmachine: Using API Version  1
	I0108 22:09:41.507811  374368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:09:41.508192  374368 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:09:41.510644  374368 out.go:177] * Stopping node "embed-certs-903819"  ...
	I0108 22:09:41.511997  374368 main.go:141] libmachine: Stopping "embed-certs-903819"...
	I0108 22:09:41.512033  374368 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:09:41.513995  374368 main.go:141] libmachine: (embed-certs-903819) Calling .Stop
	I0108 22:09:41.517762  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 0/60
	I0108 22:09:42.519294  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 1/60
	I0108 22:09:43.520870  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 2/60
	I0108 22:09:44.522486  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 3/60
	I0108 22:09:45.524253  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 4/60
	I0108 22:09:46.526345  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 5/60
	I0108 22:09:47.528076  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 6/60
	I0108 22:09:48.529838  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 7/60
	I0108 22:09:49.531707  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 8/60
	I0108 22:09:50.533872  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 9/60
	I0108 22:09:51.536110  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 10/60
	I0108 22:09:52.537807  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 11/60
	I0108 22:09:53.540130  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 12/60
	I0108 22:09:54.541726  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 13/60
	I0108 22:09:55.543186  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 14/60
	I0108 22:09:56.545808  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 15/60
	I0108 22:09:57.547238  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 16/60
	I0108 22:09:58.548684  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 17/60
	I0108 22:09:59.550169  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 18/60
	I0108 22:10:00.552018  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 19/60
	I0108 22:10:01.554195  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 20/60
	I0108 22:10:02.555863  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 21/60
	I0108 22:10:03.557298  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 22/60
	I0108 22:10:04.559045  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 23/60
	I0108 22:10:05.560634  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 24/60
	I0108 22:10:06.563092  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 25/60
	I0108 22:10:07.564825  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 26/60
	I0108 22:10:08.566112  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 27/60
	I0108 22:10:09.567540  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 28/60
	I0108 22:10:10.570276  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 29/60
	I0108 22:10:11.572393  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 30/60
	I0108 22:10:12.574793  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 31/60
	I0108 22:10:13.577353  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 32/60
	I0108 22:10:14.579034  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 33/60
	I0108 22:10:15.580921  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 34/60
	I0108 22:10:16.583258  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 35/60
	I0108 22:10:17.585084  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 36/60
	I0108 22:10:18.586999  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 37/60
	I0108 22:10:19.588706  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 38/60
	I0108 22:10:20.590079  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 39/60
	I0108 22:10:21.592488  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 40/60
	I0108 22:10:22.594329  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 41/60
	I0108 22:10:23.595520  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 42/60
	I0108 22:10:24.597166  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 43/60
	I0108 22:10:25.598830  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 44/60
	I0108 22:10:26.601108  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 45/60
	I0108 22:10:27.603160  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 46/60
	I0108 22:10:28.604823  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 47/60
	I0108 22:10:29.606463  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 48/60
	I0108 22:10:30.608210  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 49/60
	I0108 22:10:31.610344  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 50/60
	I0108 22:10:32.612236  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 51/60
	I0108 22:10:33.613679  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 52/60
	I0108 22:10:34.615562  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 53/60
	I0108 22:10:35.617225  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 54/60
	I0108 22:10:36.619324  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 55/60
	I0108 22:10:37.621026  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 56/60
	I0108 22:10:38.622691  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 57/60
	I0108 22:10:39.624402  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 58/60
	I0108 22:10:40.626345  374368 main.go:141] libmachine: (embed-certs-903819) Waiting for machine to stop 59/60
	I0108 22:10:41.627132  374368 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:10:41.627235  374368 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:10:41.629646  374368 out.go:177] 
	W0108 22:10:41.631541  374368 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 22:10:41.631559  374368 out.go:239] * 
	* 
	W0108 22:10:41.634840  374368 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:10:41.637412  374368 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-903819 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903819 -n embed-certs-903819
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903819 -n embed-certs-903819: exit status 3 (18.684957465s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:11:00.323900  374983 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host
	E0108 22:11:00.323938  374983 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-903819" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.62s)
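
The Stop failures in this group all follow the same pattern visible above: libmachine calls .Stop on the kvm2 domain, polls "Waiting for machine to stop" for 60 seconds, retries once, and the domain never leaves the "Running" state, so minikube exits 82 with GUEST_STOP_TIMEOUT. When triaging a run like this on the build host, it can help to ask libvirt directly what it thinks the domain is doing. This is a minimal editorial sketch, not part of the captured run; it assumes the default qemu:///system connection and that the kvm2 driver names the libvirt domain after the minikube profile, which are assumptions and not taken from this log:

	# Is the domain still running from libvirt's point of view?
	virsh -c qemu:///system domstate embed-certs-903819

	# Request a graceful ACPI shutdown, then force the domain off if the guest ignores it
	virsh -c qemu:///system shutdown embed-certs-903819
	virsh -c qemu:///system destroy embed-certs-903819

If a forced `virsh destroy` succeeds where `minikube stop` timed out, the guest was most likely ignoring the ACPI shutdown request rather than libvirt itself being wedged.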

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-292054 --alsologtostderr -v=3
E0108 22:09:28.014609  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 22:09:44.964569  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 22:09:56.855107  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-292054 --alsologtostderr -v=3: exit status 82 (2m1.44253472s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-292054"  ...
	* Stopping node "default-k8s-diff-port-292054"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 22:09:13.617948  374534 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:09:13.618079  374534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:09:13.618088  374534 out.go:309] Setting ErrFile to fd 2...
	I0108 22:09:13.618092  374534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:09:13.618273  374534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:09:13.618540  374534 out.go:303] Setting JSON to false
	I0108 22:09:13.618630  374534 mustload.go:65] Loading cluster: default-k8s-diff-port-292054
	I0108 22:09:13.619029  374534 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:09:13.619104  374534 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/config.json ...
	I0108 22:09:13.619274  374534 mustload.go:65] Loading cluster: default-k8s-diff-port-292054
	I0108 22:09:13.619414  374534 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:09:13.619447  374534 stop.go:39] StopHost: default-k8s-diff-port-292054
	I0108 22:09:13.619945  374534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:09:13.619997  374534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:09:13.636289  374534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0108 22:09:13.636873  374534 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:09:13.637456  374534 main.go:141] libmachine: Using API Version  1
	I0108 22:09:13.637484  374534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:09:13.637903  374534 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:09:13.640816  374534 out.go:177] * Stopping node "default-k8s-diff-port-292054"  ...
	I0108 22:09:13.642740  374534 main.go:141] libmachine: Stopping "default-k8s-diff-port-292054"...
	I0108 22:09:13.642788  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:09:13.644974  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Stop
	I0108 22:09:13.649395  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 0/60
	I0108 22:09:14.650855  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 1/60
	I0108 22:09:15.652573  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 2/60
	I0108 22:09:16.654177  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 3/60
	I0108 22:09:17.655975  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 4/60
	I0108 22:09:18.658478  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 5/60
	I0108 22:09:19.660017  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 6/60
	I0108 22:09:20.661362  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 7/60
	I0108 22:09:21.663169  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 8/60
	I0108 22:09:22.664746  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 9/60
	I0108 22:09:23.666546  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 10/60
	I0108 22:09:24.668224  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 11/60
	I0108 22:09:25.669730  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 12/60
	I0108 22:09:26.671530  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 13/60
	I0108 22:09:27.672865  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 14/60
	I0108 22:09:28.674598  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 15/60
	I0108 22:09:29.676340  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 16/60
	I0108 22:09:30.678309  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 17/60
	I0108 22:09:31.680150  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 18/60
	I0108 22:09:32.681856  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 19/60
	I0108 22:09:33.683124  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 20/60
	I0108 22:09:34.684778  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 21/60
	I0108 22:09:35.686169  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 22/60
	I0108 22:09:36.687528  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 23/60
	I0108 22:09:37.688921  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 24/60
	I0108 22:09:38.690708  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 25/60
	I0108 22:09:39.692369  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 26/60
	I0108 22:09:40.693960  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 27/60
	I0108 22:09:41.695509  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 28/60
	I0108 22:09:42.697041  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 29/60
	I0108 22:09:43.698660  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 30/60
	I0108 22:09:44.699932  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 31/60
	I0108 22:09:45.702256  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 32/60
	I0108 22:09:46.703459  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 33/60
	I0108 22:09:47.704711  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 34/60
	I0108 22:09:48.706738  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 35/60
	I0108 22:09:49.708449  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 36/60
	I0108 22:09:50.709828  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 37/60
	I0108 22:09:51.711463  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 38/60
	I0108 22:09:52.712773  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 39/60
	I0108 22:09:53.714158  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 40/60
	I0108 22:09:54.715473  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 41/60
	I0108 22:09:55.717373  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 42/60
	I0108 22:09:56.718579  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 43/60
	I0108 22:09:57.720044  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 44/60
	I0108 22:09:58.722184  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 45/60
	I0108 22:09:59.723673  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 46/60
	I0108 22:10:00.725059  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 47/60
	I0108 22:10:01.726581  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 48/60
	I0108 22:10:02.728157  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 49/60
	I0108 22:10:03.730683  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 50/60
	I0108 22:10:04.732505  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 51/60
	I0108 22:10:05.734029  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 52/60
	I0108 22:10:06.735309  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 53/60
	I0108 22:10:07.736790  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 54/60
	I0108 22:10:08.738815  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 55/60
	I0108 22:10:09.740551  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 56/60
	I0108 22:10:10.741815  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 57/60
	I0108 22:10:11.743263  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 58/60
	I0108 22:10:12.744658  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 59/60
	I0108 22:10:13.746009  374534 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:10:13.746088  374534 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:10:13.746123  374534 retry.go:31] will retry after 1.096708666s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:10:14.843446  374534 stop.go:39] StopHost: default-k8s-diff-port-292054
	I0108 22:10:14.843836  374534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:10:14.843896  374534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:10:14.860729  374534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0108 22:10:14.861353  374534 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:10:14.862090  374534 main.go:141] libmachine: Using API Version  1
	I0108 22:10:14.862138  374534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:10:14.862630  374534 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:10:14.865312  374534 out.go:177] * Stopping node "default-k8s-diff-port-292054"  ...
	I0108 22:10:14.867069  374534 main.go:141] libmachine: Stopping "default-k8s-diff-port-292054"...
	I0108 22:10:14.867092  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:10:14.869165  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Stop
	I0108 22:10:14.872521  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 0/60
	I0108 22:10:15.874088  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 1/60
	I0108 22:10:16.876442  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 2/60
	I0108 22:10:17.878021  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 3/60
	I0108 22:10:18.879411  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 4/60
	I0108 22:10:19.881637  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 5/60
	I0108 22:10:20.883244  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 6/60
	I0108 22:10:21.885475  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 7/60
	I0108 22:10:22.887051  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 8/60
	I0108 22:10:23.888766  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 9/60
	I0108 22:10:24.891050  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 10/60
	I0108 22:10:25.892386  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 11/60
	I0108 22:10:26.894349  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 12/60
	I0108 22:10:27.896113  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 13/60
	I0108 22:10:28.898126  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 14/60
	I0108 22:10:29.900471  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 15/60
	I0108 22:10:30.901978  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 16/60
	I0108 22:10:31.903820  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 17/60
	I0108 22:10:32.905511  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 18/60
	I0108 22:10:33.906673  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 19/60
	I0108 22:10:34.908619  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 20/60
	I0108 22:10:35.910146  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 21/60
	I0108 22:10:36.911957  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 22/60
	I0108 22:10:37.913598  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 23/60
	I0108 22:10:38.915034  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 24/60
	I0108 22:10:39.917452  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 25/60
	I0108 22:10:40.919350  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 26/60
	I0108 22:10:41.921427  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 27/60
	I0108 22:10:42.923273  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 28/60
	I0108 22:10:43.925469  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 29/60
	I0108 22:10:44.927875  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 30/60
	I0108 22:10:45.929573  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 31/60
	I0108 22:10:46.930993  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 32/60
	I0108 22:10:47.932610  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 33/60
	I0108 22:10:48.934023  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 34/60
	I0108 22:10:49.936279  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 35/60
	I0108 22:10:50.938133  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 36/60
	I0108 22:10:51.939908  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 37/60
	I0108 22:10:52.941413  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 38/60
	I0108 22:10:53.943519  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 39/60
	I0108 22:10:54.945515  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 40/60
	I0108 22:10:55.947334  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 41/60
	I0108 22:10:56.949004  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 42/60
	I0108 22:10:57.950627  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 43/60
	I0108 22:10:58.952491  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 44/60
	I0108 22:10:59.954788  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 45/60
	I0108 22:11:00.956783  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 46/60
	I0108 22:11:01.958416  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 47/60
	I0108 22:11:02.960351  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 48/60
	I0108 22:11:03.962086  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 49/60
	I0108 22:11:04.963919  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 50/60
	I0108 22:11:05.965741  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 51/60
	I0108 22:11:06.967336  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 52/60
	I0108 22:11:07.968990  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 53/60
	I0108 22:11:08.970594  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 54/60
	I0108 22:11:09.972686  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 55/60
	I0108 22:11:10.974520  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 56/60
	I0108 22:11:11.976246  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 57/60
	I0108 22:11:12.977741  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 58/60
	I0108 22:11:13.979462  374534 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for machine to stop 59/60
	I0108 22:11:14.980532  374534 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:11:14.980587  374534 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:11:14.982909  374534 out.go:177] 
	W0108 22:11:14.984472  374534 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 22:11:14.984488  374534 out.go:239] * 
	* 
	W0108 22:11:14.987396  374534 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:11:14.988893  374534 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-292054 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054: exit status 3 (18.61195217s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:11:33.603805  375327 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.18:22: connect: no route to host
	E0108 22:11:33.603833  375327 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.18:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-292054" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079759 -n old-k8s-version-079759
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079759 -n old-k8s-version-079759: exit status 3 (3.200355948s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:10:20.515899  374780 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0108 22:10:20.515937  374780 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-079759 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-079759 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155345353s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-079759 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079759 -n old-k8s-version-079759
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079759 -n old-k8s-version-079759: exit status 3 (3.059491419s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:10:29.731953  374850 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0108 22:10:29.731976  374850 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-079759" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)
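
The EnableAddonAfterStop failures that follow share one symptom: every status and addon command dials the guest's SSH port and gets "connect: no route to host", so the host can no longer reach the VM left behind by the failed stop. A hedged sketch for checking reachability from the hypervisor side; the profile name and the 192.168.39.183 address are taken from the log above, while the qemu:///system URI and the domain-named-after-profile convention are assumptions:

	# Does libvirt still report an interface address / DHCP lease for the guest?
	virsh -c qemu:///system domifaddr old-k8s-version-079759

	# Can the build host reach the address the test was dialing?
	ping -c 3 192.168.39.183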

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675668 -n no-preload-675668
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675668 -n no-preload-675668: exit status 3 (3.200453166s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:10:55.587881  375035 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.153:22: connect: no route to host
	E0108 22:10:55.587916  375035 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.153:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-675668 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-675668 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155789298s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.153:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-675668 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675668 -n no-preload-675668
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675668 -n no-preload-675668: exit status 3 (3.060456947s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:11:04.803963  375157 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.153:22: connect: no route to host
	E0108 22:11:04.803987  375157 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.153:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-675668" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903819 -n embed-certs-903819
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903819 -n embed-certs-903819: exit status 3 (3.199931256s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:11:03.523933  375117 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host
	E0108 22:11:03.523976  375117 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-903819 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-903819 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154270569s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-903819 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903819 -n embed-certs-903819
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903819 -n embed-certs-903819: exit status 3 (3.060580396s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:11:12.739826  375251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host
	E0108 22:11:12.739856  375251 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.132:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-903819" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054: exit status 3 (3.200001255s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:11:36.803849  375402 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.18:22: connect: no route to host
	E0108 22:11:36.803879  375402 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.18:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-292054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-292054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157794005s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-292054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054: exit status 3 (3.058268593s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:11:46.019823  375496 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.18:22: connect: no route to host
	E0108 22:11:46.019847  375496 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.18:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-292054" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)
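For context, the failing sequence above can be approximated by hand. This is a hedged sketch built from the same profile name and flags that appear in the log, not the test harness itself:

	# approximate manual reproduction (assumes the default-k8s-diff-port-292054 profile already exists)
	out/minikube-linux-amd64 stop -p default-k8s-diff-port-292054 --alsologtostderr -v=3
	out/minikube-linux-amd64 status --format='{{.Host}}' -p default-k8s-diff-port-292054    # the test expects "Stopped"
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-292054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4

In this run the status call returned "Error" instead of "Stopped" because SSH to 192.168.50.18:22 had no route to host, so the subsequent addon enable failed with MK_ADDON_ENABLE_PAUSED.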

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 22:17:44.574354  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 22:19:44.964664  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 22:19:56.854083  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079759 -n old-k8s-version-079759
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-08 22:26:21.326148406 +0000 UTC m=+5057.027270762
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
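The wait that timed out here can be approximated outside the harness with kubectl; a minimal sketch, assuming the kubeconfig context carries the profile name:

	# hypothetical equivalent of the 9m dashboard wait
	kubectl --context old-k8s-version-079759 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s

In this run no pod matching that selector became Ready before the deadline, which is what produced the context-deadline-exceeded failure above.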
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079759 -n old-k8s-version-079759
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-079759 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-079759 logs -n 25: (1.959138078s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-523607                              | cert-expiration-523607       | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343954 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | disable-driver-mounts-343954                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:09 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079759        | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC | 08 Jan 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-675668             | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-903819            | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-292054  | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC | 08 Jan 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC |                     |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079759             | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC | 08 Jan 24 22:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-675668                  | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-903819                 | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-292054       | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:11:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:11:46.087099  375556 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:11:46.087257  375556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:46.087268  375556 out.go:309] Setting ErrFile to fd 2...
	I0108 22:11:46.087273  375556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:46.087523  375556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:11:46.088153  375556 out.go:303] Setting JSON to false
	I0108 22:11:46.089299  375556 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10432,"bootTime":1704741474,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:11:46.089374  375556 start.go:138] virtualization: kvm guest
	I0108 22:11:46.092180  375556 out.go:177] * [default-k8s-diff-port-292054] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:11:46.093649  375556 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:11:46.093727  375556 notify.go:220] Checking for updates...
	I0108 22:11:46.095251  375556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:11:46.097142  375556 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:11:46.099048  375556 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:11:46.100864  375556 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:11:46.102762  375556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:11:46.105085  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:11:46.105575  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:11:46.105654  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:11:46.122253  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0108 22:11:46.122758  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:11:46.123342  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:11:46.123412  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:11:46.123752  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:11:46.123910  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:11:46.124157  375556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:11:46.124499  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:11:46.124539  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:11:46.140751  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0108 22:11:46.141282  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:11:46.141773  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:11:46.141798  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:11:46.142141  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:11:46.142444  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:11:46.184643  375556 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 22:11:46.186001  375556 start.go:298] selected driver: kvm2
	I0108 22:11:46.186020  375556 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:11:46.186148  375556 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:11:46.186947  375556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:46.187023  375556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:11:46.203781  375556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:11:46.204243  375556 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:11:46.204341  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:11:46.204355  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:11:46.204368  375556 start_flags.go:321] config:
	{Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-29205
4 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:11:46.204574  375556 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:46.206922  375556 out.go:177] * Starting control plane node default-k8s-diff-port-292054 in cluster default-k8s-diff-port-292054
	I0108 22:11:49.059974  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:11:46.208771  375556 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:11:46.208837  375556 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:11:46.208846  375556 cache.go:56] Caching tarball of preloaded images
	I0108 22:11:46.208953  375556 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:11:46.208964  375556 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:11:46.209090  375556 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/config.json ...
	I0108 22:11:46.209292  375556 start.go:365] acquiring machines lock for default-k8s-diff-port-292054: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:11:52.131718  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:11:58.211727  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:01.283728  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:07.363651  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:10.435843  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:16.515718  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:19.587893  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:25.667716  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:28.739741  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:34.819670  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:37.891747  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:43.971702  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:47.043706  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:53.123662  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:56.195726  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:02.275699  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:05.347708  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:11.427670  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:14.499733  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:20.579716  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:23.651809  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:29.731813  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:32.803834  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:38.883645  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:41.955722  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:48.035781  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:51.107833  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:57.187725  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:00.259743  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:06.339763  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:09.411776  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:15.491797  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:18.563880  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:24.643806  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:27.715717  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:33.795783  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:36.867725  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:42.947651  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:46.019719  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:52.099719  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:55.171662  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:01.251699  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:04.323666  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:07.328244  375205 start.go:369] acquired machines lock for "no-preload-675668" in 4m2.333038111s
	I0108 22:15:07.328384  375205 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:07.328398  375205 fix.go:54] fixHost starting: 
	I0108 22:15:07.328972  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:07.329012  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:07.346002  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0108 22:15:07.346606  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:07.347087  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:15:07.347112  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:07.347614  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:07.347816  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:07.347977  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:15:07.349843  375205 fix.go:102] recreateIfNeeded on no-preload-675668: state=Stopped err=<nil>
	I0108 22:15:07.349873  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	W0108 22:15:07.350055  375205 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:07.352092  375205 out.go:177] * Restarting existing kvm2 VM for "no-preload-675668" ...
	I0108 22:15:07.325708  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:07.325751  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:15:07.327981  374880 machine.go:91] provisioned docker machine in 4m37.376179376s
	I0108 22:15:07.328067  374880 fix.go:56] fixHost completed within 4m37.402208453s
	I0108 22:15:07.328080  374880 start.go:83] releasing machines lock for "old-k8s-version-079759", held for 4m37.402236557s
	W0108 22:15:07.328149  374880 start.go:694] error starting host: provision: host is not running
	W0108 22:15:07.328386  374880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0108 22:15:07.328401  374880 start.go:709] Will try again in 5 seconds ...
	I0108 22:15:07.353648  375205 main.go:141] libmachine: (no-preload-675668) Calling .Start
	I0108 22:15:07.353904  375205 main.go:141] libmachine: (no-preload-675668) Ensuring networks are active...
	I0108 22:15:07.354917  375205 main.go:141] libmachine: (no-preload-675668) Ensuring network default is active
	I0108 22:15:07.355390  375205 main.go:141] libmachine: (no-preload-675668) Ensuring network mk-no-preload-675668 is active
	I0108 22:15:07.355764  375205 main.go:141] libmachine: (no-preload-675668) Getting domain xml...
	I0108 22:15:07.356506  375205 main.go:141] libmachine: (no-preload-675668) Creating domain...
	I0108 22:15:08.673735  375205 main.go:141] libmachine: (no-preload-675668) Waiting to get IP...
	I0108 22:15:08.674861  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:08.675407  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:08.675502  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:08.675369  376073 retry.go:31] will retry after 298.445271ms: waiting for machine to come up
	I0108 22:15:08.976053  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:08.976594  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:08.976624  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:08.976525  376073 retry.go:31] will retry after 372.862343ms: waiting for machine to come up
	I0108 22:15:09.351338  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:09.351843  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:09.351864  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:09.351801  376073 retry.go:31] will retry after 463.145179ms: waiting for machine to come up
	I0108 22:15:09.816629  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:09.817035  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:09.817059  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:09.816979  376073 retry.go:31] will retry after 390.229237ms: waiting for machine to come up
	I0108 22:15:12.328668  374880 start.go:365] acquiring machines lock for old-k8s-version-079759: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:15:10.208639  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:10.209034  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:10.209068  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:10.208972  376073 retry.go:31] will retry after 547.133251ms: waiting for machine to come up
	I0108 22:15:10.758143  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:10.758742  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:10.758779  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:10.758673  376073 retry.go:31] will retry after 833.304996ms: waiting for machine to come up
	I0108 22:15:11.594018  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:11.594517  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:11.594551  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:11.594482  376073 retry.go:31] will retry after 1.155542967s: waiting for machine to come up
	I0108 22:15:12.751694  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:12.752196  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:12.752233  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:12.752162  376073 retry.go:31] will retry after 1.197873107s: waiting for machine to come up
	I0108 22:15:13.951593  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:13.952050  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:13.952072  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:13.952005  376073 retry.go:31] will retry after 1.257059014s: waiting for machine to come up
	I0108 22:15:15.211632  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:15.212133  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:15.212161  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:15.212090  376073 retry.go:31] will retry after 2.27321783s: waiting for machine to come up
	I0108 22:15:17.487177  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:17.487684  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:17.487712  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:17.487631  376073 retry.go:31] will retry after 2.218202362s: waiting for machine to come up
	I0108 22:15:19.709130  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:19.709618  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:19.709651  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:19.709552  376073 retry.go:31] will retry after 2.976711307s: waiting for machine to come up
	I0108 22:15:22.687741  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:22.688337  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:22.688373  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:22.688238  376073 retry.go:31] will retry after 4.028238242s: waiting for machine to come up
	I0108 22:15:28.088862  375293 start.go:369] acquired machines lock for "embed-certs-903819" in 4m15.164556555s
	I0108 22:15:28.088954  375293 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:28.088965  375293 fix.go:54] fixHost starting: 
	I0108 22:15:28.089472  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:28.089526  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:28.108636  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0108 22:15:28.109141  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:28.109765  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:15:28.109816  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:28.110214  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:28.110458  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:28.110642  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:15:28.112595  375293 fix.go:102] recreateIfNeeded on embed-certs-903819: state=Stopped err=<nil>
	I0108 22:15:28.112635  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	W0108 22:15:28.112883  375293 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:28.115226  375293 out.go:177] * Restarting existing kvm2 VM for "embed-certs-903819" ...
	I0108 22:15:26.721451  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.721880  375205 main.go:141] libmachine: (no-preload-675668) Found IP for machine: 192.168.61.153
	I0108 22:15:26.721905  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has current primary IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.721912  375205 main.go:141] libmachine: (no-preload-675668) Reserving static IP address...
	I0108 22:15:26.722449  375205 main.go:141] libmachine: (no-preload-675668) Reserved static IP address: 192.168.61.153
	I0108 22:15:26.722475  375205 main.go:141] libmachine: (no-preload-675668) Waiting for SSH to be available...
	I0108 22:15:26.722498  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "no-preload-675668", mac: "52:54:00:08:3b:59", ip: "192.168.61.153"} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.722528  375205 main.go:141] libmachine: (no-preload-675668) DBG | skip adding static IP to network mk-no-preload-675668 - found existing host DHCP lease matching {name: "no-preload-675668", mac: "52:54:00:08:3b:59", ip: "192.168.61.153"}
	I0108 22:15:26.722545  375205 main.go:141] libmachine: (no-preload-675668) DBG | Getting to WaitForSSH function...
	I0108 22:15:26.724512  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.724861  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.724898  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.725004  375205 main.go:141] libmachine: (no-preload-675668) DBG | Using SSH client type: external
	I0108 22:15:26.725078  375205 main.go:141] libmachine: (no-preload-675668) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa (-rw-------)
	I0108 22:15:26.725130  375205 main.go:141] libmachine: (no-preload-675668) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:15:26.725152  375205 main.go:141] libmachine: (no-preload-675668) DBG | About to run SSH command:
	I0108 22:15:26.725172  375205 main.go:141] libmachine: (no-preload-675668) DBG | exit 0
	I0108 22:15:26.815569  375205 main.go:141] libmachine: (no-preload-675668) DBG | SSH cmd err, output: <nil>: 
	I0108 22:15:26.816005  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetConfigRaw
	I0108 22:15:26.816711  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:26.819269  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.819636  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.819681  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.819964  375205 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/config.json ...
	I0108 22:15:26.820191  375205 machine.go:88] provisioning docker machine ...
	I0108 22:15:26.820215  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:26.820446  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:26.820626  375205 buildroot.go:166] provisioning hostname "no-preload-675668"
	I0108 22:15:26.820648  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:26.820790  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:26.823021  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.823390  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.823421  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.823567  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:26.823781  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.823943  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.824103  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:26.824331  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:26.824924  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:26.824958  375205 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-675668 && echo "no-preload-675668" | sudo tee /etc/hostname
	I0108 22:15:26.960664  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-675668
	
	I0108 22:15:26.960713  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:26.964110  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.964397  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.964437  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.964605  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:26.964918  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.965153  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.965334  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:26.965543  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:26.965958  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:26.965985  375205 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-675668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-675668/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-675668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:15:27.102584  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:27.102632  375205 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:15:27.102663  375205 buildroot.go:174] setting up certificates
	I0108 22:15:27.102678  375205 provision.go:83] configureAuth start
	I0108 22:15:27.102688  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:27.103024  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:27.105986  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.106379  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.106400  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.106586  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.108670  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.109003  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.109029  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.109216  375205 provision.go:138] copyHostCerts
	I0108 22:15:27.109300  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:15:27.109320  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:15:27.109426  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:15:27.109561  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:15:27.109571  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:15:27.109599  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:15:27.109663  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:15:27.109670  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:15:27.109691  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:15:27.109751  375205 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.no-preload-675668 san=[192.168.61.153 192.168.61.153 localhost 127.0.0.1 minikube no-preload-675668]
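configureAuth generates a server certificate whose subject alternative names cover the VM IP, localhost and the machine name, signed with the shared minikube CA key shown in the log line above. The sketch below illustrates the same SAN handling with Go's crypto/x509, but self-signs for brevity instead of using a CA key pair; the organization and SAN values are copied from the log purely as examples.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-675668"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above: VM IP, localhost and host names.
		IPAddresses: []net.IP{net.ParseIP("192.168.61.153"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-675668"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
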
	I0108 22:15:27.297801  375205 provision.go:172] copyRemoteCerts
	I0108 22:15:27.297888  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:15:27.297915  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.301050  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.301503  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.301545  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.301737  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.301955  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.302121  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.302265  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:27.394076  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:15:27.420873  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 22:15:27.446852  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:15:27.475352  375205 provision.go:86] duration metric: configureAuth took 372.6598ms
	I0108 22:15:27.475406  375205 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:15:27.475661  375205 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:15:27.475793  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.478557  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.478872  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.478906  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.479091  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.479354  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.479579  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.479768  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.479939  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:27.480273  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:27.480291  375205 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:15:27.822802  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:15:27.822834  375205 machine.go:91] provisioned docker machine in 1.002628424s
	I0108 22:15:27.822845  375205 start.go:300] post-start starting for "no-preload-675668" (driver="kvm2")
	I0108 22:15:27.822858  375205 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:15:27.822874  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:27.823282  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:15:27.823320  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.825948  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.826276  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.826298  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.826407  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.826597  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.826793  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.826922  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:27.918118  375205 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:15:27.922998  375205 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:15:27.923044  375205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:15:27.923151  375205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:15:27.923275  375205 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:15:27.923407  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:15:27.933715  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:27.960061  375205 start.go:303] post-start completed in 137.19795ms
	I0108 22:15:27.960109  375205 fix.go:56] fixHost completed within 20.631710493s
	I0108 22:15:27.960137  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.963254  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.963656  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.963688  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.964017  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.964325  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.964533  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.964722  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.964945  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:27.965301  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:27.965314  375205 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:15:28.088665  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752128.028688224
	
	I0108 22:15:28.088696  375205 fix.go:206] guest clock: 1704752128.028688224
	I0108 22:15:28.088706  375205 fix.go:219] Guest: 2024-01-08 22:15:28.028688224 +0000 UTC Remote: 2024-01-08 22:15:27.960113957 +0000 UTC m=+263.145626296 (delta=68.574267ms)
	I0108 22:15:28.088734  375205 fix.go:190] guest clock delta is within tolerance: 68.574267ms
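The clock check above reads the guest time with `date +%s.%N` over SSH, compares it with the host clock, and only resynchronises when the delta exceeds a small tolerance. A hedged sketch of the comparison, assuming the same seconds.nanoseconds output format; the 2s threshold is illustrative, not minikube's actual value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1704752128.028688224")
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
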
	I0108 22:15:28.088742  375205 start.go:83] releasing machines lock for "no-preload-675668", held for 20.760456272s
	I0108 22:15:28.088775  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.089136  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:28.091887  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.092255  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.092274  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.092537  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093187  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093416  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093504  375205 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:15:28.093546  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:28.093722  375205 ssh_runner.go:195] Run: cat /version.json
	I0108 22:15:28.093769  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:28.096920  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.096969  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097385  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.097428  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097460  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.097482  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097739  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:28.097767  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:28.098020  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:28.098074  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:28.098243  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:28.098254  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:28.098459  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:28.098460  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:28.221319  375205 ssh_runner.go:195] Run: systemctl --version
	I0108 22:15:28.227501  375205 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:15:28.379259  375205 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:15:28.386159  375205 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:15:28.386272  375205 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:15:28.404416  375205 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:15:28.404469  375205 start.go:475] detecting cgroup driver to use...
	I0108 22:15:28.404575  375205 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:15:28.421612  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:15:28.438920  375205 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:15:28.439001  375205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:15:28.455220  375205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:15:28.473982  375205 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:15:28.610132  375205 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:15:28.735485  375205 docker.go:219] disabling docker service ...
	I0108 22:15:28.735627  375205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:15:28.750327  375205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:15:28.768782  375205 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:15:28.891784  375205 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:15:29.006680  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:15:29.023187  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:15:29.043520  375205 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:15:29.043601  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.056442  375205 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:15:29.056525  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.066874  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.077969  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.090310  375205 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:15:29.102253  375205 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:15:29.114920  375205 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:15:29.115022  375205 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:15:29.131677  375205 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:15:29.142326  375205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:15:29.259562  375205 ssh_runner.go:195] Run: sudo systemctl restart crio
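The runtime configuration above is applied by rewriting individual keys in /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon cgroup) and then restarting CRI-O. A minimal Go sketch of the equivalent single-key rewrite, operating on the file contents in memory; setConfKey is a hypothetical helper and only the key/value pairs are taken from the log.

package main

import (
	"fmt"
	"regexp"
)

// setConfKey replaces an existing `key = ...` line, mimicking the
// `sed -i 's|^.*key = .*$|key = "value"|'` commands in the log above.
func setConfKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf(`%s = %q`, key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	conf = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setConfKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
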
	I0108 22:15:29.463482  375205 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:15:29.463554  375205 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:15:29.468579  375205 start.go:543] Will wait 60s for crictl version
	I0108 22:15:29.468665  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:29.476630  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:15:29.525900  375205 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:15:29.526053  375205 ssh_runner.go:195] Run: crio --version
	I0108 22:15:29.579948  375205 ssh_runner.go:195] Run: crio --version
	I0108 22:15:29.632573  375205 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
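After restarting CRI-O, the start-up code waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl, as logged above. A hedged sketch of that kind of poll-with-deadline using os.Stat; the path in main is /tmp only so the example succeeds on any Linux host.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// minikube waits on /var/run/crio/crio.sock inside the guest.
	if err := waitForPath("/tmp", 60*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("socket path is present")
	}
}
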
	I0108 22:15:29.634161  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:29.637972  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:29.638472  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:29.638514  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:29.638828  375205 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0108 22:15:29.644170  375205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:29.658242  375205 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:15:29.658302  375205 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:29.701366  375205 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0108 22:15:29.701422  375205 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:15:29.701626  375205 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0108 22:15:29.701685  375205 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.701583  375205 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.701743  375205 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.701674  375205 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.701597  375205 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:29.701743  375205 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.701582  375205 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.703644  375205 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:29.703679  375205 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.703705  375205 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0108 22:15:29.703722  375205 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.703643  375205 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.703651  375205 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.703655  375205 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.703652  375205 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:28.117212  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Start
	I0108 22:15:28.117480  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring networks are active...
	I0108 22:15:28.118363  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring network default is active
	I0108 22:15:28.118783  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring network mk-embed-certs-903819 is active
	I0108 22:15:28.119425  375293 main.go:141] libmachine: (embed-certs-903819) Getting domain xml...
	I0108 22:15:28.120203  375293 main.go:141] libmachine: (embed-certs-903819) Creating domain...
	I0108 22:15:29.474037  375293 main.go:141] libmachine: (embed-certs-903819) Waiting to get IP...
	I0108 22:15:29.475109  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:29.475735  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:29.475862  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:29.475696  376188 retry.go:31] will retry after 284.136631ms: waiting for machine to come up
	I0108 22:15:29.762077  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:29.762586  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:29.762614  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:29.762538  376188 retry.go:31] will retry after 303.052805ms: waiting for machine to come up
	I0108 22:15:30.067299  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:30.067947  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:30.067997  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:30.067822  376188 retry.go:31] will retry after 471.679894ms: waiting for machine to come up
	I0108 22:15:30.541942  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:30.542626  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:30.542658  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:30.542542  376188 retry.go:31] will retry after 534.448155ms: waiting for machine to come up
	I0108 22:15:31.078549  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:31.079168  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:31.079212  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:31.079092  376188 retry.go:31] will retry after 595.348277ms: waiting for machine to come up
	I0108 22:15:31.675832  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:31.676249  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:31.676278  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:31.676209  376188 retry.go:31] will retry after 618.587146ms: waiting for machine to come up
	I0108 22:15:32.296396  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:32.296982  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:32.297011  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:32.296820  376188 retry.go:31] will retry after 730.322233ms: waiting for machine to come up
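The embed-certs-903819 machine is still waiting for a DHCP lease at this point; libmachine polls the network and retries with a growing, jittered delay (284ms, 303ms, 471ms, ... in the lines above). A small Go sketch of that retry pattern follows, with an illustrative backoff schedule rather than the exact one retry.go uses.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a little longer (plus jitter) between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(8, 100*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}
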
	I0108 22:15:29.877942  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.891002  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.891714  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.893908  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0108 22:15:29.901880  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.959729  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.975241  375205 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0108 22:15:29.975301  375205 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.975308  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.975351  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.022214  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.074289  375205 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0108 22:15:30.074350  375205 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:30.074422  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.107460  375205 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0108 22:15:30.107547  375205 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:30.107634  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.137086  375205 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0108 22:15:30.137155  375205 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:30.137227  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.156198  375205 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0108 22:15:30.156291  375205 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:30.156357  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163468  375205 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0108 22:15:30.163522  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:30.163532  375205 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:30.163563  375205 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0108 22:15:30.163616  375205 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.163654  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:30.163660  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163762  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:30.163779  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:30.163583  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163849  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:30.304360  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:30.304458  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0108 22:15:30.304478  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0108 22:15:30.304481  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:30.304564  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.304603  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.304568  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:30.304636  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:30.304678  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:30.304738  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:30.307415  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:30.307516  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:30.322465  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0108 22:15:30.322505  375205 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.322616  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.323275  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390462  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0108 22:15:30.390530  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0108 22:15:30.390546  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 22:15:30.390566  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390612  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390651  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:30.390657  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:32.649486  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.326834963s)
	I0108 22:15:32.649532  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0108 22:15:32.649560  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:32.649569  375205 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.258890537s)
	I0108 22:15:32.649612  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:32.649622  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0108 22:15:32.649573  375205 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.258898806s)
	I0108 22:15:32.649638  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0108 22:15:33.028658  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:33.029086  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:33.029117  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:33.029023  376188 retry.go:31] will retry after 1.009306133s: waiting for machine to come up
	I0108 22:15:34.040145  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:34.040574  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:34.040610  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:34.040517  376188 retry.go:31] will retry after 1.215287271s: waiting for machine to come up
	I0108 22:15:35.258130  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:35.258735  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:35.258767  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:35.258669  376188 retry.go:31] will retry after 1.604579686s: waiting for machine to come up
	I0108 22:15:36.865156  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:36.865635  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:36.865671  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:36.865575  376188 retry.go:31] will retry after 1.938816817s: waiting for machine to come up
	I0108 22:15:35.937824  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.288173217s)
	I0108 22:15:35.937859  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0108 22:15:35.937899  375205 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:35.938005  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:38.805792  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:38.806390  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:38.806420  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:38.806318  376188 retry.go:31] will retry after 2.933374936s: waiting for machine to come up
	I0108 22:15:41.741267  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:41.741924  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:41.741962  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:41.741850  376188 retry.go:31] will retry after 3.549554778s: waiting for machine to come up
	I0108 22:15:40.512566  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.574525189s)
	I0108 22:15:40.512605  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0108 22:15:40.512642  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:40.512699  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:43.180687  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.667951486s)
	I0108 22:15:43.180730  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0108 22:15:43.180766  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:43.180849  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:44.539187  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.35830707s)
	I0108 22:15:44.539234  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0108 22:15:44.539274  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:44.539335  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:45.294867  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:45.295522  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:45.295572  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:45.295439  376188 retry.go:31] will retry after 5.642834673s: waiting for machine to come up
	I0108 22:15:46.498360  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.95899411s)
	I0108 22:15:46.498392  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0108 22:15:46.498417  375205 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:46.498473  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:47.553626  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.055107765s)
	I0108 22:15:47.553672  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0108 22:15:47.553708  375205 cache_images.go:123] Successfully loaded all cached images
	I0108 22:15:47.553715  375205 cache_images.go:92] LoadImages completed in 17.852269213s
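LoadImages compares the image ID reported by the runtime against the hash each required image was cached with, transfers and `podman load`s only the ones that are missing, and records the total duration (17.85s here). The sketch below models just the "needs transfer" decision with hypothetical hash values; it does not talk to a real runtime.

package main

import "fmt"

func main() {
	// expected image -> content hash the cache was built against (illustrative values)
	expected := map[string]string{
		"registry.k8s.io/etcd:3.5.10-0":               "a0eed15e",
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2": "bbb47a0f",
		"gcr.io/k8s-minikube/storage-provisioner:v5":  "6e38f40d",
	}
	// what the runtime actually reported (empty string = image absent)
	present := map[string]string{
		"registry.k8s.io/etcd:3.5.10-0":               "",         // not in the runtime yet
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2": "bbb47a0f", // matches, nothing to do
	}
	for img, want := range expected {
		if present[img] != want {
			fmt.Printf("%q needs transfer: not present at hash %s\n", img, want)
			continue
		}
		fmt.Printf("%q already loaded\n", img)
	}
}
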
	I0108 22:15:47.553796  375205 ssh_runner.go:195] Run: crio config
	I0108 22:15:47.626385  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:15:47.626428  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:15:47.626471  375205 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:15:47.626503  375205 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.153 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-675668 NodeName:no-preload-675668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:15:47.626764  375205 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-675668"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:15:47.626889  375205 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-675668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-675668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
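The kubeadm config and kubelet unit above are rendered from the kubeadm options logged at 22:15:47.626503: the node IP, node name, pod subnet and Kubernetes version are substituted into a fixed multi-document YAML template. A reduced Go sketch of that substitution with text/template; the template below keeps only a handful of fields and is not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	params := struct {
		NodeIP, NodeName, KubernetesVersion, PodSubnet, ServiceCIDR string
		APIServerPort                                               int
	}{
		NodeIP:            "192.168.61.153",
		NodeName:          "no-preload-675668",
		KubernetesVersion: "v1.29.0-rc.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		APIServerPort:     8443,
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
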
	I0108 22:15:47.626994  375205 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0108 22:15:47.638161  375205 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:15:47.638263  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:15:47.648004  375205 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0108 22:15:47.667877  375205 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0108 22:15:47.685914  375205 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0108 22:15:47.705814  375205 ssh_runner.go:195] Run: grep 192.168.61.153	control-plane.minikube.internal$ /etc/hosts
	I0108 22:15:47.709842  375205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:47.724788  375205 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668 for IP: 192.168.61.153
	I0108 22:15:47.724877  375205 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:15:47.725349  375205 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:15:47.725420  375205 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:15:47.725541  375205 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.key
	I0108 22:15:47.725626  375205 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.key.0768d075
	I0108 22:15:47.725668  375205 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.key
	I0108 22:15:47.725793  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:15:47.725822  375205 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:15:47.725837  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:15:47.725861  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:15:47.725886  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:15:47.725908  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:15:47.725952  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:47.727130  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:15:47.753432  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:15:47.780962  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:15:47.807446  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:15:47.834334  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:15:47.861638  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:15:47.889479  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:15:47.916119  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:15:47.944635  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:15:47.971740  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:15:47.998594  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:15:48.025907  375205 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:15:48.044525  375205 ssh_runner.go:195] Run: openssl version
	I0108 22:15:48.050542  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:15:48.061205  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.066945  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.067060  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.074266  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:15:48.084613  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:15:48.095856  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.101596  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.101677  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.108991  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:15:48.120690  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:15:48.130747  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.135480  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.135576  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.141462  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
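Each of the three certificate-install sequences above follows the same pattern: link the PEM into /etc/ssl/certs by name, compute its OpenSSL subject hash, and add a <hash>.0 symlink so OpenSSL's trust lookup can find it (3ec20f2e, b5213941 and 51391683 are those hashes for this run). A compact sketch of the pattern; the helper name install_ca is not in the log, it is only for illustration:

    install_ca() {
      local pem=$1                                  # e.g. /usr/share/ca-certificates/minikubeCA.pem
      local name hash
      name=$(basename "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/$name"     # by-name symlink
      hash=$(openssl x509 -hash -noout -in "$pem")  # subject hash, e.g. b5213941
      sudo test -L "/etc/ssl/certs/$hash.0" || \
        sudo ln -fs "/etc/ssl/certs/$name" "/etc/ssl/certs/$hash.0"
    }
    install_ca /usr/share/ca-certificates/minikubeCA.pem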
	I0108 22:15:48.152597  375205 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:15:48.158657  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:15:48.165978  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:15:48.174164  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:15:48.181140  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:15:48.187819  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:15:48.194088  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
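The six openssl -checkend 86400 calls above are expiry checks: the command exits 0 if the certificate stays valid for at least another 86400 seconds (24 hours) and non-zero if it expires within that window, which is what minikube uses to decide whether control-plane certificates need regenerating on restart. For example:

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "etcd server cert valid for at least another 24h"
    else
      echo "etcd server cert expires within 24h - would be regenerated"
    fi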
	I0108 22:15:48.200487  375205 kubeadm.go:404] StartCluster: {Name:no-preload-675668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-675668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.153 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:15:48.200612  375205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:15:48.200686  375205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:15:48.244804  375205 cri.go:89] found id: ""
	I0108 22:15:48.244894  375205 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:15:48.255502  375205 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:15:48.255549  375205 kubeadm.go:636] restartCluster start
	I0108 22:15:48.255625  375205 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:15:48.265914  375205 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:48.267815  375205 kubeconfig.go:92] found "no-preload-675668" server: "https://192.168.61.153:8443"
	I0108 22:15:48.271555  375205 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:15:48.281619  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:48.281694  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:48.293360  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:48.781917  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:48.782063  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:48.795101  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:49.281683  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:49.281784  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:49.295392  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:49.781910  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:49.782011  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:49.795016  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
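The repeated pairs of "Checking apiserver status ..." and "stopped: unable to get apiserver pid" above (and further down for the same process 375205) are one polling loop: roughly every 500ms the runner asks the guest for a kube-apiserver pid and keeps retrying until it finds one or the retry window runs out. In shell terms the probe is simply:

    # retried on a ~500ms interval until a pid appears or the deadline passes
    while ! pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do
      sleep 0.5
    done
    echo "apiserver pid: $pid"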
	I0108 22:15:52.309259  375556 start.go:369] acquired machines lock for "default-k8s-diff-port-292054" in 4m6.099929885s
	I0108 22:15:52.309332  375556 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:52.309353  375556 fix.go:54] fixHost starting: 
	I0108 22:15:52.309795  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:52.309827  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:52.327510  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42663
	I0108 22:15:52.328130  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:52.328844  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:15:52.328877  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:52.329458  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:52.329740  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:15:52.329938  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:15:52.331851  375556 fix.go:102] recreateIfNeeded on default-k8s-diff-port-292054: state=Stopped err=<nil>
	I0108 22:15:52.331887  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	W0108 22:15:52.332071  375556 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:52.334604  375556 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-292054" ...
	I0108 22:15:50.942498  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.943038  375293 main.go:141] libmachine: (embed-certs-903819) Found IP for machine: 192.168.72.132
	I0108 22:15:50.943076  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has current primary IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.943087  375293 main.go:141] libmachine: (embed-certs-903819) Reserving static IP address...
	I0108 22:15:50.943577  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "embed-certs-903819", mac: "52:54:00:73:74:da", ip: "192.168.72.132"} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:50.943606  375293 main.go:141] libmachine: (embed-certs-903819) Reserved static IP address: 192.168.72.132
	I0108 22:15:50.943620  375293 main.go:141] libmachine: (embed-certs-903819) DBG | skip adding static IP to network mk-embed-certs-903819 - found existing host DHCP lease matching {name: "embed-certs-903819", mac: "52:54:00:73:74:da", ip: "192.168.72.132"}
	I0108 22:15:50.943636  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Getting to WaitForSSH function...
	I0108 22:15:50.943655  375293 main.go:141] libmachine: (embed-certs-903819) Waiting for SSH to be available...
	I0108 22:15:50.945879  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.946330  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:50.946362  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.946493  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Using SSH client type: external
	I0108 22:15:50.946532  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa (-rw-------)
	I0108 22:15:50.946589  375293 main.go:141] libmachine: (embed-certs-903819) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:15:50.946606  375293 main.go:141] libmachine: (embed-certs-903819) DBG | About to run SSH command:
	I0108 22:15:50.946641  375293 main.go:141] libmachine: (embed-certs-903819) DBG | exit 0
	I0108 22:15:51.051155  375293 main.go:141] libmachine: (embed-certs-903819) DBG | SSH cmd err, output: <nil>: 
	I0108 22:15:51.051655  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetConfigRaw
	I0108 22:15:51.052363  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:51.054890  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.055247  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.055276  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.055618  375293 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/config.json ...
	I0108 22:15:51.055862  375293 machine.go:88] provisioning docker machine ...
	I0108 22:15:51.055887  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:51.056117  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.056263  375293 buildroot.go:166] provisioning hostname "embed-certs-903819"
	I0108 22:15:51.056283  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.056427  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.058406  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.058775  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.058822  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.058953  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.059154  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.059318  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.059478  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.059654  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.060145  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.060166  375293 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-903819 && echo "embed-certs-903819" | sudo tee /etc/hostname
	I0108 22:15:51.207967  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-903819
	
	I0108 22:15:51.208007  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.210549  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.210848  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.210876  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.211120  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.211372  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.211539  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.211707  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.211879  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.212375  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.212399  375293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-903819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-903819/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-903819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:15:51.356887  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:51.356936  375293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:15:51.356968  375293 buildroot.go:174] setting up certificates
	I0108 22:15:51.356997  375293 provision.go:83] configureAuth start
	I0108 22:15:51.357012  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.357424  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:51.360156  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.360553  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.360590  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.360735  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.363438  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.363850  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.363905  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.364020  375293 provision.go:138] copyHostCerts
	I0108 22:15:51.364111  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:15:51.364126  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:15:51.364193  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:15:51.364332  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:15:51.364347  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:15:51.364376  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:15:51.364453  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:15:51.364463  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:15:51.364490  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:15:51.364552  375293 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.embed-certs-903819 san=[192.168.72.132 192.168.72.132 localhost 127.0.0.1 minikube embed-certs-903819]
	I0108 22:15:51.472949  375293 provision.go:172] copyRemoteCerts
	I0108 22:15:51.473023  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:15:51.473053  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.476622  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.476975  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.476997  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.477269  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.477524  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.477719  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.477852  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:51.576283  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:15:51.604809  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:15:51.633353  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 22:15:51.660375  375293 provision.go:86] duration metric: configureAuth took 303.352585ms
	I0108 22:15:51.660422  375293 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:15:51.660657  375293 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:15:51.660764  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.664337  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.664738  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.664796  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.665089  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.665394  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.665649  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.665823  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.666047  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.666595  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.666633  375293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:15:52.023397  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
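The %!s(MISSING) in the printf above is an artifact of how the command string is logged, not what ran on the guest: Go's fmt package prints %!s(MISSING) when a %s verb has no matching argument, so the literal %s inside the command gets mangled when the string is passed back through a formatting call (the same mangling shows up later in the date and crictl.yaml commands). The command sent to the guest most likely contains a plain %s, i.e. in effect:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio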
	
	I0108 22:15:52.023450  375293 machine.go:91] provisioned docker machine in 967.568803ms
	I0108 22:15:52.023469  375293 start.go:300] post-start starting for "embed-certs-903819" (driver="kvm2")
	I0108 22:15:52.023485  375293 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:15:52.023514  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.023922  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:15:52.023979  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.026998  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.027417  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.027447  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.027665  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.027875  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.028050  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.028240  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.126087  375293 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:15:52.130371  375293 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:15:52.130414  375293 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:15:52.130509  375293 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:15:52.130609  375293 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:15:52.130738  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:15:52.139897  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:52.166648  375293 start.go:303] post-start completed in 143.156785ms
	I0108 22:15:52.166691  375293 fix.go:56] fixHost completed within 24.077726567s
	I0108 22:15:52.166721  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.169452  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.169849  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.169880  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.170156  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.170463  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.170716  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.170909  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.171089  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:52.171520  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:52.171535  375293 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:15:52.309104  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752152.251541184
	
	I0108 22:15:52.309136  375293 fix.go:206] guest clock: 1704752152.251541184
	I0108 22:15:52.309146  375293 fix.go:219] Guest: 2024-01-08 22:15:52.251541184 +0000 UTC Remote: 2024-01-08 22:15:52.166696501 +0000 UTC m=+279.417512277 (delta=84.844683ms)
	I0108 22:15:52.309173  375293 fix.go:190] guest clock delta is within tolerance: 84.844683ms
	I0108 22:15:52.309180  375293 start.go:83] releasing machines lock for "embed-certs-903819", held for 24.220254192s
	I0108 22:15:52.309214  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.309514  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:52.312538  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.312905  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.312928  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.313161  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313692  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313879  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313971  375293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:15:52.314031  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.314154  375293 ssh_runner.go:195] Run: cat /version.json
	I0108 22:15:52.314185  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.316938  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317214  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317363  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.317398  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.317425  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317456  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317599  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.317746  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.317803  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.317882  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.318074  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.318074  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.318273  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.318332  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.451292  375293 ssh_runner.go:195] Run: systemctl --version
	I0108 22:15:52.459839  375293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:15:52.609989  375293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:15:52.617215  375293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:15:52.617326  375293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:15:52.633017  375293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:15:52.633068  375293 start.go:475] detecting cgroup driver to use...
	I0108 22:15:52.633180  375293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:15:52.649947  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:15:52.664459  375293 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:15:52.664530  375293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:15:52.680105  375293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:15:52.696100  375293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:15:52.814616  375293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:15:52.951975  375293 docker.go:219] disabling docker service ...
	I0108 22:15:52.952086  375293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:15:52.967800  375293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:15:52.982903  375293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:15:53.107033  375293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:15:53.222765  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:15:53.238572  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:15:53.260919  375293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:15:53.261035  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.271980  375293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:15:53.272084  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.283693  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.298686  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.310543  375293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
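The block of sed commands above is the whole cri-o reconfiguration for this profile: point the pause image at registry.k8s.io/pause:3.9, force the cgroupfs cgroup manager, and pin conmon to the "pod" cgroup, all in /etc/crio/crio.conf.d/02-crio.conf. Consolidated into one readable sequence (same commands as the log, just grouped):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                         # remove any existing value
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"  # re-add it after cgroup_manager
    sudo rm -rf /etc/cni/net.mk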
	I0108 22:15:53.322108  375293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:15:53.331904  375293 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:15:53.331982  375293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:15:53.347091  375293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:15:53.358365  375293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:15:53.462607  375293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:15:53.658267  375293 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:15:53.658362  375293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:15:53.663859  375293 start.go:543] Will wait 60s for crictl version
	I0108 22:15:53.663941  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:15:53.668413  375293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:15:53.714319  375293 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:15:53.714456  375293 ssh_runner.go:195] Run: crio --version
	I0108 22:15:53.774601  375293 ssh_runner.go:195] Run: crio --version
	I0108 22:15:53.840055  375293 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 22:15:50.282005  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:50.282118  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:50.296034  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:50.781676  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:50.781865  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:50.794250  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:51.281771  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:51.281866  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:51.296593  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:51.782094  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:51.782193  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:51.797110  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.281711  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:52.281844  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:52.294916  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.782076  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:52.782193  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:52.796700  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:53.282191  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:53.282320  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:53.300226  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:53.781708  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:53.781807  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:53.794426  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:54.281901  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:54.282005  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:54.305276  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:54.781646  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:54.781765  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:54.798991  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.336203  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Start
	I0108 22:15:52.336440  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring networks are active...
	I0108 22:15:52.337318  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring network default is active
	I0108 22:15:52.337660  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring network mk-default-k8s-diff-port-292054 is active
	I0108 22:15:52.338019  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Getting domain xml...
	I0108 22:15:52.338689  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Creating domain...
	I0108 22:15:53.715046  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting to get IP...
	I0108 22:15:53.716237  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.716849  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.716944  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:53.716801  376345 retry.go:31] will retry after 252.013763ms: waiting for machine to come up
	I0108 22:15:53.970408  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.971019  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.971049  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:53.970958  376345 retry.go:31] will retry after 266.473735ms: waiting for machine to come up
	I0108 22:15:54.239763  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.240226  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.240251  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:54.240173  376345 retry.go:31] will retry after 429.57645ms: waiting for machine to come up
	I0108 22:15:54.672202  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.672716  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.672752  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:54.672669  376345 retry.go:31] will retry after 585.093805ms: waiting for machine to come up
	I0108 22:15:55.259153  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.259706  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.259743  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:55.259654  376345 retry.go:31] will retry after 689.434093ms: waiting for machine to come up
	I0108 22:15:55.950610  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.951205  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.951239  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:55.951157  376345 retry.go:31] will retry after 895.874654ms: waiting for machine to come up
	I0108 22:15:53.841949  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:53.845797  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:53.846200  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:53.846248  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:53.846494  375293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0108 22:15:53.851791  375293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:53.866130  375293 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:15:53.866207  375293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:53.932186  375293 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:15:53.932311  375293 ssh_runner.go:195] Run: which lz4
	I0108 22:15:53.937259  375293 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:15:53.944022  375293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:15:53.944077  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 22:15:55.993976  375293 crio.go:444] Took 2.056742 seconds to copy over tarball
	I0108 22:15:55.994073  375293 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
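The preload step above is how minikube avoids pulling every image individually: since crictl reported no cached images, the ~458 MB preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 tarball is copied into the guest as /preloaded.tar.lz4 and unpacked over /var, which populates cri-o's image store directly. On the guest side that is just:

    # extract the lz4-compressed image/state tarball over /var
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4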
	I0108 22:15:55.281653  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:55.281788  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:55.303179  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:55.781655  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:55.781803  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:55.801287  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:56.281804  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:56.281897  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:56.306479  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:56.782123  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:56.782248  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:56.799241  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:57.281778  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:57.281926  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:57.299917  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:57.782255  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:57.782392  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:57.797960  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:58.282738  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:58.282919  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:58.300271  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:58.300333  375205 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:15:58.300349  375205 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:15:58.300365  375205 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:15:58.300452  375205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:15:58.353658  375205 cri.go:89] found id: ""
	I0108 22:15:58.353755  375205 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:15:58.372503  375205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:15:58.393266  375205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:15:58.393366  375205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:15:58.406210  375205 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:15:58.406255  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:58.570457  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:59.811449  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.240942109s)
	I0108 22:15:59.811494  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:56.848455  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:56.848893  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:56.848925  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:56.848869  376345 retry.go:31] will retry after 1.095460706s: waiting for machine to come up
	I0108 22:15:57.946534  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:57.947045  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:57.947084  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:57.947000  376345 retry.go:31] will retry after 975.046116ms: waiting for machine to come up
	I0108 22:15:58.923872  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:58.924402  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:58.924436  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:58.924351  376345 retry.go:31] will retry after 1.855498831s: waiting for machine to come up
	I0108 22:16:00.781295  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:00.781813  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:00.781842  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:00.781745  376345 retry.go:31] will retry after 1.560909915s: waiting for machine to come up
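
The kvm2 driver keeps re-reading the domain's DHCP leases and sleeps a growing, randomized delay between attempts until the VM reports an IP, which is what the retry.go lines above show. A generic sketch of that retry-with-backoff pattern follows; the parameters and the simulated lookup are made up for illustration and are not the driver's actual code.

// retry_backoff.go - generic sketch of the "will retry after ..." pattern in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls lookup until it succeeds or attempts are exhausted,
// sleeping a randomized, growing delay between tries.
func retryWithBackoff(attempts int, base time.Duration, lookup func() (string, error)) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		lastErr = err
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", lastErr
}

func main() {
	// Simulated lookup that "finds" the IP on the fourth attempt.
	calls := 0
	ip, err := retryWithBackoff(8, 500*time.Millisecond, func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.50.18", nil
	})
	fmt.Println(ip, err)
}
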
	I0108 22:15:59.648230  375293 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.654100182s)
	I0108 22:15:59.648275  375293 crio.go:451] Took 3.654264 seconds to extract the tarball
	I0108 22:15:59.648293  375293 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:15:59.707614  375293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:59.763291  375293 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:15:59.763318  375293 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:15:59.763416  375293 ssh_runner.go:195] Run: crio config
	I0108 22:15:59.840951  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:15:59.840986  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:15:59.841015  375293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:15:59.841038  375293 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-903819 NodeName:embed-certs-903819 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:15:59.841205  375293 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-903819"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:15:59.841283  375293 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-903819 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-903819 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
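
The kubeadm YAML and the kubelet drop-in above are rendered from templates and then written to the node over SSH (the `scp memory -->` lines that follow). Below is a minimal sketch of rendering such a drop-in with text/template; the struct and field names are illustrative, not minikube's real config types.

// render_dropin.go - hypothetical sketch of rendering a kubelet systemd drop-in
// like the [Service] block shown above, using text/template.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

// opts mirrors the handful of values the drop-in depends on
// (illustrative struct, not minikube's config type).
type opts struct {
	ContainerRuntime  string
	KubernetesVersion string
	CRISocket         string
	NodeName          string
	NodeIP            string
}

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropin))
	// Values taken from the log above for embed-certs-903819.
	_ = t.Execute(os.Stdout, opts{
		ContainerRuntime:  "crio",
		KubernetesVersion: "v1.28.4",
		CRISocket:         "unix:///var/run/crio/crio.sock",
		NodeName:          "embed-certs-903819",
		NodeIP:            "192.168.72.132",
	})
}
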
	I0108 22:15:59.841341  375293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:15:59.854399  375293 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:15:59.854521  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:15:59.864630  375293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0108 22:15:59.887590  375293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:15:59.907618  375293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0108 22:15:59.930429  375293 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I0108 22:15:59.935347  375293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:59.954840  375293 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819 for IP: 192.168.72.132
	I0108 22:15:59.954893  375293 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:15:59.955092  375293 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:15:59.955151  375293 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:15:59.955277  375293 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/client.key
	I0108 22:15:59.955460  375293 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.key.b7fe571d
	I0108 22:15:59.955557  375293 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.key
	I0108 22:15:59.955780  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:15:59.955832  375293 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:15:59.955855  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:15:59.955897  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:15:59.955931  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:15:59.955962  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:15:59.956023  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:59.957003  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:15:59.984268  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:16:00.018065  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:00.049758  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:00.079731  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:00.115904  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:00.148655  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:00.186204  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:00.224356  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:00.258906  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:00.293420  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:00.328219  375293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:00.351811  375293 ssh_runner.go:195] Run: openssl version
	I0108 22:16:00.360327  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:00.373384  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.381553  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.381653  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.391609  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:00.406242  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:00.419455  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.426093  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.426218  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.433793  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:00.446550  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:00.463174  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.470386  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.470471  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.477752  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:00.492003  375293 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:00.498273  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:00.506305  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:00.515120  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:00.523909  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:00.531966  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:00.540080  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
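
Each `openssl x509 -noout -in <cert> -checkend 86400` call above exits non-zero when the certificate expires within the next 24 hours, which is how stale control-plane certificates are detected before the restart. An equivalent check written in Go is sketched below; it is illustrative, not the code minikube actually runs.

// checkend.go - sketch of the `openssl x509 -checkend 86400` idea in Go:
// report whether a PEM certificate will still be valid in 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires before now+window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; any PEM-encoded certificate works.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}
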
	I0108 22:16:00.547673  375293 kubeadm.go:404] StartCluster: {Name:embed-certs-903819 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-903819 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:00.547852  375293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:00.547933  375293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:00.596555  375293 cri.go:89] found id: ""
	I0108 22:16:00.596644  375293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:00.607985  375293 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:00.608023  375293 kubeadm.go:636] restartCluster start
	I0108 22:16:00.608092  375293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:00.620669  375293 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:00.621860  375293 kubeconfig.go:92] found "embed-certs-903819" server: "https://192.168.72.132:8443"
	I0108 22:16:00.624246  375293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:00.638481  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:00.638578  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:00.658261  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:01.138670  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:01.138876  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:01.154778  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:01.639152  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:01.639290  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:01.659301  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:02.138679  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:02.138871  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:02.159427  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:02.638859  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:02.638970  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:02.660608  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:00.076906  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:00.244500  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:00.356164  375205 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:00.356290  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:00.856674  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:01.356420  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:01.857416  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:02.356778  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:02.857385  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:03.356493  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:03.379896  375205 api_server.go:72] duration metric: took 3.023730091s to wait for apiserver process to appear ...
	I0108 22:16:03.379953  375205 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:03.380023  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
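
After the control-plane and etcd init phases, the log waits for the kube-apiserver process to appear by re-running `pgrep` roughly every half second. A small sketch of that polling loop is shown below, assuming a local pgrep rather than minikube's ssh_runner; it is illustrative only.

// wait_apiserver.go - sketch of the "waiting for apiserver process to appear"
// loop above: run pgrep every 500ms until it succeeds or a timeout elapses.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `sudo pgrep -xnf pattern` until it exits 0 or the timeout hits.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process matching %q did not appear within %v", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
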
	I0108 22:16:02.344786  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:02.345408  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:02.345444  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:02.345339  376345 retry.go:31] will retry after 2.336202352s: waiting for machine to come up
	I0108 22:16:04.685192  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:04.685894  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:04.685947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:04.685809  376345 retry.go:31] will retry after 3.559467663s: waiting for machine to come up
	I0108 22:16:03.139113  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:03.139272  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:03.158043  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:03.638583  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:03.638737  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:03.659573  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:04.139075  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:04.139225  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:04.158993  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:04.638600  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:04.638766  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:04.657099  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:05.138627  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:05.138728  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:05.156654  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:05.639289  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:05.639436  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:05.658060  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:06.139303  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:06.139466  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:06.153866  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:06.638492  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:06.638651  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:06.656088  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.138685  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:07.138840  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:07.158365  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.638744  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:07.638838  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:07.656010  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.463229  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:07.463273  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:07.463299  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:07.534774  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:07.534812  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:07.880243  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:07.886835  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:07.886881  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:08.380688  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:08.385776  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:08.385821  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:08.880979  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:08.890142  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:08.890180  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:09.380526  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:09.385856  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 200:
	ok
	I0108 22:16:09.394800  375205 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 22:16:09.394838  375205 api_server.go:131] duration metric: took 6.014875532s to wait for apiserver health ...
	I0108 22:16:09.394851  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:16:09.394861  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:09.396785  375205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:09.398197  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:09.422683  375205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:09.464557  375205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:09.483416  375205 system_pods.go:59] 8 kube-system pods found
	I0108 22:16:09.483460  375205 system_pods.go:61] "coredns-76f75df574-v8fsw" [7d69f8ec-6684-49d0-8567-4032298a4e5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:09.483471  375205 system_pods.go:61] "etcd-no-preload-675668" [bc088c6e-5037-4e51-a021-2c5ac3c1c60c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:09.483488  375205 system_pods.go:61] "kube-apiserver-no-preload-675668" [0bbdf118-c47c-4298-ae5e-a984729ec21e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:09.483497  375205 system_pods.go:61] "kube-controller-manager-no-preload-675668" [2c3ff259-60a7-4205-b55f-85fe2d8e340d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:09.483513  375205 system_pods.go:61] "kube-proxy-dnbvk" [1803ec6b-5bd3-4ebb-bfd5-3a1356a1f168] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:09.483522  375205 system_pods.go:61] "kube-scheduler-no-preload-675668" [47737c5e-b59a-4df0-ac7c-36525e17733c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:09.483532  375205 system_pods.go:61] "metrics-server-57f55c9bc5-pk8bm" [71c7c744-c5fa-41e7-a26f-c04c30379b97] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:09.483537  375205 system_pods.go:61] "storage-provisioner" [1266430c-beda-4fa1-a057-cb07b8bf1f89] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:09.483547  375205 system_pods.go:74] duration metric: took 18.952011ms to wait for pod list to return data ...
	I0108 22:16:09.483562  375205 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:09.502939  375205 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:09.502989  375205 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:09.503007  375205 node_conditions.go:105] duration metric: took 19.439582ms to run NodePressure ...
	I0108 22:16:09.503031  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
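
The healthz progression above (403 while the anonymous probe is rejected, then 500 while post-start hooks finish, then 200) is typical during apiserver bring-up. Below is a sketch of a poller that tolerates those intermediate responses; the endpoint and timeout are copied from the log for illustration, and the code is not minikube's implementation.

// healthz_wait.go - sketch of polling the apiserver /healthz endpoint until it
// returns 200, tolerating the 403 and 500 responses seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver cert is not in the host trust store during bring-up,
		// so certificate verification is skipped for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", code)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	// Endpoint taken from the log above (no-preload-675668's apiserver).
	if err := waitForHealthz("https://192.168.61.153:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
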
	I0108 22:16:08.246675  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:08.247243  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:08.247302  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:08.247185  376345 retry.go:31] will retry after 3.860632675s: waiting for machine to come up
	I0108 22:16:08.139286  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:08.139413  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:08.155694  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:08.639385  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:08.639521  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:08.655368  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:09.139022  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:09.139171  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:09.153512  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:09.638642  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:09.638765  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:09.653202  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.138833  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:10.138924  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:10.153529  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.639273  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:10.639462  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:10.655947  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.655981  375293 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:10.655991  375293 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:10.656003  375293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:10.656082  375293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:10.706638  375293 cri.go:89] found id: ""
	I0108 22:16:10.706721  375293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:10.726540  375293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:10.739540  375293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:10.739619  375293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:10.751112  375293 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:10.751158  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:10.877306  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.453755  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.664034  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.778440  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.866216  375293 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:11.866364  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:12.366749  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.862826  374880 start.go:369] acquired machines lock for "old-k8s-version-079759" in 1m1.534060396s
	I0108 22:16:13.862908  374880 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:16:13.862922  374880 fix.go:54] fixHost starting: 
	I0108 22:16:13.863465  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:16:13.863514  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:16:13.890658  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0108 22:16:13.891256  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:16:13.891974  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:16:13.891997  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:16:13.892356  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:16:13.892526  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:13.892634  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:16:13.894503  374880 fix.go:102] recreateIfNeeded on old-k8s-version-079759: state=Stopped err=<nil>
	I0108 22:16:13.894532  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	W0108 22:16:13.894707  374880 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:16:13.896778  374880 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-079759" ...
	I0108 22:16:13.898346  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Start
	I0108 22:16:13.898517  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring networks are active...
	I0108 22:16:13.899441  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring network default is active
	I0108 22:16:13.899906  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring network mk-old-k8s-version-079759 is active
	I0108 22:16:13.900424  374880 main.go:141] libmachine: (old-k8s-version-079759) Getting domain xml...
	I0108 22:16:13.901232  374880 main.go:141] libmachine: (old-k8s-version-079759) Creating domain...
	I0108 22:16:10.069721  375205 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:10.077465  375205 kubeadm.go:787] kubelet initialised
	I0108 22:16:10.077494  375205 kubeadm.go:788] duration metric: took 7.739231ms waiting for restarted kubelet to initialise ...
	I0108 22:16:10.077503  375205 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:10.085099  375205 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-v8fsw" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:12.095498  375205 pod_ready.go:102] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:14.100054  375205 pod_ready.go:102] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:12.111578  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.112089  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Found IP for machine: 192.168.50.18
	I0108 22:16:12.112118  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Reserving static IP address...
	I0108 22:16:12.112138  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has current primary IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.112627  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-292054", mac: "52:54:00:8d:25:78", ip: "192.168.50.18"} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.112660  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Reserved static IP address: 192.168.50.18
	I0108 22:16:12.112684  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | skip adding static IP to network mk-default-k8s-diff-port-292054 - found existing host DHCP lease matching {name: "default-k8s-diff-port-292054", mac: "52:54:00:8d:25:78", ip: "192.168.50.18"}
	I0108 22:16:12.112706  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Getting to WaitForSSH function...
	I0108 22:16:12.112729  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for SSH to be available...
	I0108 22:16:12.115245  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.115723  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.115762  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.115881  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Using SSH client type: external
	I0108 22:16:12.115917  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa (-rw-------)
	I0108 22:16:12.115947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:16:12.115967  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | About to run SSH command:
	I0108 22:16:12.116013  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | exit 0
	I0108 22:16:12.221209  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | SSH cmd err, output: <nil>: 
	I0108 22:16:12.221755  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetConfigRaw
	I0108 22:16:12.222634  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:12.225565  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.226008  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.226036  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.226326  375556 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/config.json ...
	I0108 22:16:12.226626  375556 machine.go:88] provisioning docker machine ...
	I0108 22:16:12.226658  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:12.226946  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.227160  375556 buildroot.go:166] provisioning hostname "default-k8s-diff-port-292054"
	I0108 22:16:12.227187  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.227381  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.230424  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.230867  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.230918  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.231036  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.231302  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.231511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.231674  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.231856  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:12.232448  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:12.232476  375556 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-292054 && echo "default-k8s-diff-port-292054" | sudo tee /etc/hostname
	I0108 22:16:12.382972  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-292054
	
	I0108 22:16:12.383015  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.386658  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.387055  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.387110  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.387410  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.387780  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.388020  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.388284  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.388576  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:12.388935  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:12.388954  375556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-292054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-292054/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-292054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:16:12.536473  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:16:12.536514  375556 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:16:12.536597  375556 buildroot.go:174] setting up certificates
	I0108 22:16:12.536619  375556 provision.go:83] configureAuth start
	I0108 22:16:12.536638  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.536995  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:12.540248  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.540775  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.540813  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.540982  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.544343  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.544924  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.544986  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.545143  375556 provision.go:138] copyHostCerts
	I0108 22:16:12.545241  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:16:12.545256  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:16:12.545329  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:16:12.545468  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:16:12.545485  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:16:12.545525  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:16:12.545603  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:16:12.545612  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:16:12.545630  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:16:12.545717  375556 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-292054 san=[192.168.50.18 192.168.50.18 localhost 127.0.0.1 minikube default-k8s-diff-port-292054]
	I0108 22:16:12.853268  375556 provision.go:172] copyRemoteCerts
	I0108 22:16:12.853332  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:16:12.853359  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.856503  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.856926  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.856959  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.857295  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.857536  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.857699  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.857904  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:12.961751  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:16:12.999065  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 22:16:13.037282  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:16:13.075006  375556 provision.go:86] duration metric: configureAuth took 538.367435ms
	I0108 22:16:13.075048  375556 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:16:13.075403  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:16:13.075509  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.078643  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.079141  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.079187  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.079518  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.079765  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.079976  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.080145  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.080388  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:13.080860  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:13.080891  375556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:16:13.523316  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:16:13.523355  375556 machine.go:91] provisioned docker machine in 1.296708962s
	I0108 22:16:13.523391  375556 start.go:300] post-start starting for "default-k8s-diff-port-292054" (driver="kvm2")
	I0108 22:16:13.523427  375556 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:16:13.523458  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.523937  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:16:13.523982  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.528392  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.528941  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.529005  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.529344  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.529715  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.529947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.530160  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:13.644605  375556 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:16:13.651917  375556 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:16:13.651970  375556 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:16:13.652120  375556 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:16:13.652268  375556 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:16:13.652452  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:16:13.667715  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:13.707995  375556 start.go:303] post-start completed in 184.580746ms
	I0108 22:16:13.708032  375556 fix.go:56] fixHost completed within 21.398677633s
	I0108 22:16:13.708061  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.712186  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.712754  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.712785  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.713001  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.713308  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.713572  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.713784  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.714062  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:13.714576  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:13.714597  375556 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:16:13.862558  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752173.800899341
	
	I0108 22:16:13.862600  375556 fix.go:206] guest clock: 1704752173.800899341
	I0108 22:16:13.862613  375556 fix.go:219] Guest: 2024-01-08 22:16:13.800899341 +0000 UTC Remote: 2024-01-08 22:16:13.708038237 +0000 UTC m=+267.678081968 (delta=92.861104ms)
	I0108 22:16:13.862688  375556 fix.go:190] guest clock delta is within tolerance: 92.861104ms
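
The `date +%s.%N` round trip above is how the run estimates guest/host clock skew before deciding whether a resync is needed. A minimal Go sketch of that comparison, reusing the timestamps from the log lines above; the one-second tolerance is an illustrative assumption, not the waiter's actual threshold:

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// guestClockDelta parses the guest's `date +%s.%N` output and returns how far the
	// guest clock is ahead of (positive) or behind (negative) the given host time.
	func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 && len(parts[1]) == 9 { // %N prints nine digits
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}
	
	func main() {
		// Values taken from the log above; the tolerance below is an assumption.
		guest := "1704752173.800899341"
		host := time.Unix(1704752173, 708038237)
		delta, err := guestClockDelta(guest, host)
		if err != nil {
			panic(err)
		}
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < time.Second && delta > -time.Second)
	}
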
	I0108 22:16:13.862700  375556 start.go:83] releasing machines lock for "default-k8s-diff-port-292054", held for 21.553389859s
	I0108 22:16:13.862760  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.863344  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:13.867702  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.868132  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.868160  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.868553  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869294  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869606  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869710  375556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:16:13.869908  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.870024  375556 ssh_runner.go:195] Run: cat /version.json
	I0108 22:16:13.870055  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.874047  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.874604  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.874637  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876082  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876102  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.876135  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.876339  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876083  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.876354  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.876518  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.876771  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.876808  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.876928  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:13.877140  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:14.020544  375556 ssh_runner.go:195] Run: systemctl --version
	I0108 22:16:14.030180  375556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:16:14.192218  375556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:16:14.200925  375556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:16:14.201038  375556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:16:14.223169  375556 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:16:14.223200  375556 start.go:475] detecting cgroup driver to use...
	I0108 22:16:14.223274  375556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:16:14.246782  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:16:14.264283  375556 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:16:14.264417  375556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:16:14.281460  375556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:16:14.295968  375556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:16:14.443907  375556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:16:14.611299  375556 docker.go:219] disabling docker service ...
	I0108 22:16:14.611425  375556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:16:14.630493  375556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:16:14.649912  375556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:16:14.787666  375556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:16:14.971826  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:16:15.004969  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:16:15.032889  375556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:16:15.032982  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.050131  375556 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:16:15.050223  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.066011  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.082365  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.098387  375556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:16:15.115648  375556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:16:15.129675  375556 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:16:15.129848  375556 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:16:15.151333  375556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
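
The failed sysctl probe, the `modprobe br_netfilter` fallback, and the ip_forward write above prepare the kernel for a bridge CNI. A rough Go sketch of that check-then-fallback sequence; it assumes root on a Linux host and mirrors only the commands visible in the log:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run executes a command and wraps any failure with its combined output.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
		}
		return nil
	}
	
	func main() {
		// The sysctl is absent until br_netfilter is loaded, so a failure here is expected
		// and loading the module is the fallback, exactly as the log above shows.
		if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			if err := run("modprobe", "br_netfilter"); err != nil {
				panic(err)
			}
		}
		// IPv4 forwarding is required for pod-to-pod traffic through the bridge.
		if err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			panic(err)
		}
		fmt.Println("bridge netfilter and ip_forward configured")
	}
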
	I0108 22:16:15.165637  375556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:16:15.308416  375556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:16:15.580204  375556 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:16:15.580284  375556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:16:15.587895  375556 start.go:543] Will wait 60s for crictl version
	I0108 22:16:15.588108  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:16:15.594471  375556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:16:15.645175  375556 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:16:15.645273  375556 ssh_runner.go:195] Run: crio --version
	I0108 22:16:15.707630  375556 ssh_runner.go:195] Run: crio --version
	I0108 22:16:15.779275  375556 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 22:16:15.781032  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:15.784486  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:15.784896  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:15.784965  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:15.785126  375556 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0108 22:16:15.790707  375556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:15.810441  375556 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:16:15.810515  375556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:15.867423  375556 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:16:15.867591  375556 ssh_runner.go:195] Run: which lz4
	I0108 22:16:15.873029  375556 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 22:16:15.879394  375556 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:16:15.879500  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
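
The failed `stat` followed by the large transfer above is an existence check: the ~458 MB preload tarball is pushed only when the guest does not already have it. A sketch of that check-then-copy step; the key, host address, and local path below are placeholders, not values from this run:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// All paths and the target address are placeholders, not values from this run.
		const (
			key    = "/path/to/machine/id_rsa"
			target = "docker@192.0.2.10"
			remote = "/preloaded.tar.lz4"
			local  = "/path/to/preloaded-images.tar.lz4"
		)
		// `stat` exits non-zero when the file is missing; that failure is the signal to copy.
		if err := exec.Command("ssh", "-i", key, target, "stat", remote).Run(); err != nil {
			fmt.Println("preload tarball missing on guest, copying archive...")
			if err := exec.Command("scp", "-i", key, local, target+":"+remote).Run(); err != nil {
				panic(err)
			}
		}
		fmt.Println("preload tarball present on guest")
	}
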
	I0108 22:16:12.867258  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.367211  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.866433  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.366622  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.866611  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.907073  375293 api_server.go:72] duration metric: took 3.040854669s to wait for apiserver process to appear ...
	I0108 22:16:14.907116  375293 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:14.907141  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:15.738179  374880 main.go:141] libmachine: (old-k8s-version-079759) Waiting to get IP...
	I0108 22:16:15.739231  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:15.739808  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:15.739893  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:15.739787  376492 retry.go:31] will retry after 271.587986ms: waiting for machine to come up
	I0108 22:16:16.013648  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.014344  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.014388  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.014267  376492 retry.go:31] will retry after 376.425749ms: waiting for machine to come up
	I0108 22:16:16.392497  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.392985  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.393013  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.392894  376492 retry.go:31] will retry after 340.776058ms: waiting for machine to come up
	I0108 22:16:16.735696  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.736412  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.736452  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.736349  376492 retry.go:31] will retry after 559.6759ms: waiting for machine to come up
	I0108 22:16:17.297397  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:17.297990  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:17.298027  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:17.297965  376492 retry.go:31] will retry after 738.214425ms: waiting for machine to come up
	I0108 22:16:18.038578  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:18.039239  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:18.039269  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:18.039120  376492 retry.go:31] will retry after 762.268706ms: waiting for machine to come up
	I0108 22:16:18.803986  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:18.804560  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:18.804589  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:18.804438  376492 retry.go:31] will retry after 1.027542644s: waiting for machine to come up
	I0108 22:16:15.104174  375205 pod_ready.go:92] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:15.104208  375205 pod_ready.go:81] duration metric: took 5.01907031s waiting for pod "coredns-76f75df574-v8fsw" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:15.104223  375205 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:17.117526  375205 pod_ready.go:102] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:19.615842  375205 pod_ready.go:102] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:17.971748  375556 crio.go:444] Took 2.098761 seconds to copy over tarball
	I0108 22:16:17.971905  375556 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:16:19.481826  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:19.481865  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:19.481883  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:19.529381  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:19.529427  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:19.907613  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:19.914772  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:19.914824  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:20.407461  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:20.418184  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:20.418238  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:20.908072  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:20.920042  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:20.920085  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:21.407506  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:21.414375  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I0108 22:16:21.428398  375293 api_server.go:141] control plane version: v1.28.4
	I0108 22:16:21.428439  375293 api_server.go:131] duration metric: took 6.521312808s to wait for apiserver health ...
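
The 403 → 500 → 200 progression above is the expected apiserver startup sequence: anonymous requests are rejected until the RBAC bootstrap roles exist, and /healthz keeps returning 500 while individual post-start hooks finish. A minimal polling sketch against the same endpoint; the insecure TLS client and the two-minute deadline are simplifying assumptions (the real waiter authenticates with the cluster's certificates):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		// Endpoint taken from the log above; skipping TLS verification is only tolerable
		// against a throwaway local test VM.
		url := "https://192.168.72.132:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// 403 (anonymous forbidden) and 500 (post-start hooks pending) are retryable.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for /healthz")
	}
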
	I0108 22:16:21.428451  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:16:21.428460  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:21.920874  375293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:22.268512  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:22.284953  375293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:22.309346  375293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:22.465452  375293 system_pods.go:59] 9 kube-system pods found
	I0108 22:16:22.465501  375293 system_pods.go:61] "coredns-5dd5756b68-wxfs6" [965cab31-c39a-4885-bc6f-6575fe026794] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:22.465516  375293 system_pods.go:61] "coredns-5dd5756b68-zbjfn" [1b521296-8e4c-4252-a729-5727cd71d3f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:22.465534  375293 system_pods.go:61] "etcd-embed-certs-903819" [be30d1b3-e4a8-4daf-9c0e-f3b776499471] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:22.465546  375293 system_pods.go:61] "kube-apiserver-embed-certs-903819" [530546d9-1cec-45f5-9e3e-f5d08e913cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:22.465563  375293 system_pods.go:61] "kube-controller-manager-embed-certs-903819" [bb0d60c9-cdaf-491d-aa20-5a522f351e17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:22.465573  375293 system_pods.go:61] "kube-proxy-gjlx8" [9247e922-69de-4e59-a6d2-06c791d43031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:22.465586  375293 system_pods.go:61] "kube-scheduler-embed-certs-903819" [1aa50057-5aa4-44b2-a762-6f0eee5b3856] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:22.465602  375293 system_pods.go:61] "metrics-server-57f55c9bc5-jswgz" [8f18e01f-981d-48fe-9ce6-5155794da657] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:22.465614  375293 system_pods.go:61] "storage-provisioner" [ea2ac609-5857-4597-9432-e2f4f4630ee2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:22.465629  375293 system_pods.go:74] duration metric: took 156.242171ms to wait for pod list to return data ...
	I0108 22:16:22.465643  375293 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:22.523465  375293 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:22.523529  375293 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:22.523552  375293 node_conditions.go:105] duration metric: took 57.897769ms to run NodePressure ...
	I0108 22:16:22.523585  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:19.833814  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:19.834296  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:19.834341  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:19.834229  376492 retry.go:31] will retry after 1.469300536s: waiting for machine to come up
	I0108 22:16:21.305138  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:21.305962  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:21.306001  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:21.305834  376492 retry.go:31] will retry after 1.215696449s: waiting for machine to come up
	I0108 22:16:22.523937  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:22.524780  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:22.524813  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:22.524676  376492 retry.go:31] will retry after 1.652609537s: waiting for machine to come up
	I0108 22:16:24.179958  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:24.180881  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:24.180910  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:24.180780  376492 retry.go:31] will retry after 2.03835476s: waiting for machine to come up
	I0108 22:16:21.115112  375205 pod_ready.go:92] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.115153  375205 pod_ready.go:81] duration metric: took 6.010921481s waiting for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.115169  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.130056  375205 pod_ready.go:92] pod "kube-apiserver-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.130113  375205 pod_ready.go:81] duration metric: took 14.932775ms waiting for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.130137  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.149011  375205 pod_ready.go:92] pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.149054  375205 pod_ready.go:81] duration metric: took 18.905543ms waiting for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.149071  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dnbvk" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.162994  375205 pod_ready.go:92] pod "kube-proxy-dnbvk" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.163037  375205 pod_ready.go:81] duration metric: took 13.956516ms waiting for pod "kube-proxy-dnbvk" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.163053  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.172926  375205 pod_ready.go:92] pod "kube-scheduler-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.172975  375205 pod_ready.go:81] duration metric: took 9.906476ms waiting for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.172991  375205 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:23.182086  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:22.162439  375556 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.190451334s)
	I0108 22:16:22.162503  375556 crio.go:451] Took 4.190696 seconds to extract the tarball
	I0108 22:16:22.162522  375556 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:16:22.212617  375556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:22.290948  375556 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:16:22.290982  375556 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:16:22.291067  375556 ssh_runner.go:195] Run: crio config
	I0108 22:16:22.361099  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:16:22.361135  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:22.361166  375556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:16:22.361192  375556 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.18 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-292054 NodeName:default-k8s-diff-port-292054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:16:22.361488  375556 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.18
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-292054"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:16:22.361599  375556 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-292054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 22:16:22.361681  375556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:16:22.376350  375556 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:16:22.376489  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:16:22.389808  375556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0108 22:16:22.414305  375556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:16:22.433716  375556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0108 22:16:22.461925  375556 ssh_runner.go:195] Run: grep 192.168.50.18	control-plane.minikube.internal$ /etc/hosts
	I0108 22:16:22.467236  375556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:22.484487  375556 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054 for IP: 192.168.50.18
	I0108 22:16:22.484537  375556 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:16:22.484688  375556 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:16:22.484724  375556 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:16:22.484794  375556 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/client.key
	I0108 22:16:22.484845  375556 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.key.4ed28ecc
	I0108 22:16:22.484886  375556 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.key
	I0108 22:16:22.485012  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:16:22.485042  375556 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:16:22.485056  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:16:22.485077  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:16:22.485107  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:16:22.485133  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:16:22.485182  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:22.485917  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:16:22.516640  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:16:22.554723  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:22.589730  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:22.624933  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:22.656950  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:22.691213  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:22.725882  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:22.757465  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:22.789479  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:22.818877  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:22.848834  375556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:22.869951  375556 ssh_runner.go:195] Run: openssl version
	I0108 22:16:22.877921  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:22.892998  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.899697  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.899798  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.906225  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:22.918957  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:22.930809  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.937461  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.937595  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.945257  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:22.956453  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:22.969894  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.976162  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.976249  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.983601  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:22.995487  375556 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:23.002869  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:23.011231  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:23.019450  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:23.028645  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:23.036530  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:23.044216  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 22:16:23.050779  375556 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:23.050875  375556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:23.050968  375556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:23.098736  375556 cri.go:89] found id: ""
	I0108 22:16:23.098806  375556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:23.110702  375556 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:23.110738  375556 kubeadm.go:636] restartCluster start
	I0108 22:16:23.110807  375556 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:23.122131  375556 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.124018  375556 kubeconfig.go:92] found "default-k8s-diff-port-292054" server: "https://192.168.50.18:8444"
	I0108 22:16:23.127827  375556 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:23.141921  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:23.142029  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:23.155738  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.642320  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:23.642416  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:23.655783  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:24.142361  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:24.142522  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:24.161739  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:24.642247  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:24.642392  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:24.659564  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:25.142097  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:25.142341  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:25.156773  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:25.642249  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:25.642362  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:25.655785  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.802042  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.278422708s)
	I0108 22:16:23.802099  375293 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:23.816719  375293 kubeadm.go:787] kubelet initialised
	I0108 22:16:23.816770  375293 kubeadm.go:788] duration metric: took 14.659036ms waiting for restarted kubelet to initialise ...
	I0108 22:16:23.816787  375293 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:23.831999  375293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:25.843652  375293 pod_ready.go:102] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:26.220729  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:26.221388  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:26.221424  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:26.221322  376492 retry.go:31] will retry after 2.215929666s: waiting for machine to come up
	I0108 22:16:28.440185  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:28.440859  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:28.440894  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:28.440781  376492 retry.go:31] will retry after 4.455149908s: waiting for machine to come up
	I0108 22:16:25.184929  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:27.682851  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:29.685033  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:26.142553  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:26.142728  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:26.160691  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:26.642356  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:26.642469  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:26.656481  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.142104  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:27.142265  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:27.157378  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.642473  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:27.642577  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:27.656662  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:28.142925  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:28.143080  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:28.160815  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:28.642072  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:28.642188  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:28.662580  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:29.142008  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:29.142158  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:29.161132  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:29.642780  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:29.642919  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:29.661247  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:30.142588  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:30.142747  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:30.159262  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:30.642472  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:30.642650  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:30.659741  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.847129  375293 pod_ready.go:102] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:30.347456  375293 pod_ready.go:92] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:30.347490  375293 pod_ready.go:81] duration metric: took 6.51546229s waiting for pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.347501  375293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.354929  375293 pod_ready.go:92] pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:30.354955  375293 pod_ready.go:81] duration metric: took 7.447354ms waiting for pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.354965  375293 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.867755  375293 pod_ready.go:92] pod "etcd-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.867788  375293 pod_ready.go:81] duration metric: took 1.512815387s waiting for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.867801  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.875662  375293 pod_ready.go:92] pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.875711  375293 pod_ready.go:81] duration metric: took 7.899159ms waiting for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.875730  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.885348  375293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.885395  375293 pod_ready.go:81] duration metric: took 9.655438ms waiting for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.885410  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gjlx8" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.943389  375293 pod_ready.go:92] pod "kube-proxy-gjlx8" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.943424  375293 pod_ready.go:81] duration metric: took 58.006295ms waiting for pod "kube-proxy-gjlx8" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.943435  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.337716  375293 pod_ready.go:92] pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:32.337752  375293 pod_ready.go:81] duration metric: took 394.305103ms waiting for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.337763  375293 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.901098  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:32.901564  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:32.901601  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:32.901488  376492 retry.go:31] will retry after 3.655042594s: waiting for machine to come up
	I0108 22:16:32.182102  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:34.685634  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:31.142410  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:31.142532  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:31.156191  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:31.642990  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:31.643137  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:31.656623  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:32.142116  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:32.142225  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:32.155597  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:32.642804  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:32.642897  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:32.656038  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:33.142630  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:33.142742  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:33.155977  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:33.156022  375556 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:33.156049  375556 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:33.156064  375556 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:33.156127  375556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:33.205442  375556 cri.go:89] found id: ""
	I0108 22:16:33.205556  375556 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:33.225775  375556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:33.236014  375556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:33.236122  375556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:33.246331  375556 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:33.246385  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:33.389338  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.044093  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.279910  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.436859  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.536169  375556 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:34.536274  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:35.036740  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:35.536732  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:36.036604  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:34.346227  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.347971  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.558150  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.558817  374880 main.go:141] libmachine: (old-k8s-version-079759) Found IP for machine: 192.168.39.183
	I0108 22:16:36.558839  374880 main.go:141] libmachine: (old-k8s-version-079759) Reserving static IP address...
	I0108 22:16:36.558855  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has current primary IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.559397  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "old-k8s-version-079759", mac: "52:54:00:79:02:7b", ip: "192.168.39.183"} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.559451  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | skip adding static IP to network mk-old-k8s-version-079759 - found existing host DHCP lease matching {name: "old-k8s-version-079759", mac: "52:54:00:79:02:7b", ip: "192.168.39.183"}
	I0108 22:16:36.559471  374880 main.go:141] libmachine: (old-k8s-version-079759) Reserved static IP address: 192.168.39.183
	I0108 22:16:36.559495  374880 main.go:141] libmachine: (old-k8s-version-079759) Waiting for SSH to be available...
	I0108 22:16:36.559511  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Getting to WaitForSSH function...
	I0108 22:16:36.562077  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.562439  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.562496  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.562806  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Using SSH client type: external
	I0108 22:16:36.562846  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa (-rw-------)
	I0108 22:16:36.562938  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:16:36.562985  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | About to run SSH command:
	I0108 22:16:36.563005  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | exit 0
	I0108 22:16:36.655957  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | SSH cmd err, output: <nil>: 
	I0108 22:16:36.656393  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetConfigRaw
	I0108 22:16:36.657349  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:36.660624  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.661056  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.661097  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.661415  374880 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/config.json ...
	I0108 22:16:36.661673  374880 machine.go:88] provisioning docker machine ...
	I0108 22:16:36.661699  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:36.662007  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.662224  374880 buildroot.go:166] provisioning hostname "old-k8s-version-079759"
	I0108 22:16:36.662249  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.662416  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.665572  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.666013  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.666056  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.666311  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:36.666582  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.666770  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.666945  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:36.667141  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:36.667677  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:36.667700  374880 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-079759 && echo "old-k8s-version-079759" | sudo tee /etc/hostname
	I0108 22:16:36.813113  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-079759
	
	I0108 22:16:36.813174  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.816444  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.816774  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.816814  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.816995  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:36.817323  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.817559  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.817739  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:36.817969  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:36.818431  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:36.818461  374880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-079759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-079759/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-079759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:16:36.952252  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:16:36.952306  374880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:16:36.952343  374880 buildroot.go:174] setting up certificates
	I0108 22:16:36.952359  374880 provision.go:83] configureAuth start
	I0108 22:16:36.952372  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.952803  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:36.955895  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.956276  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.956310  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.956579  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.959251  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.959667  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.959723  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.959825  374880 provision.go:138] copyHostCerts
	I0108 22:16:36.959896  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:16:36.959909  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:16:36.959987  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:16:36.960106  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:16:36.960122  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:16:36.960152  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:16:36.960240  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:16:36.960251  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:16:36.960286  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:16:36.960370  374880 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-079759 san=[192.168.39.183 192.168.39.183 localhost 127.0.0.1 minikube old-k8s-version-079759]
	I0108 22:16:37.054312  374880 provision.go:172] copyRemoteCerts
	I0108 22:16:37.054396  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:16:37.054428  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.058048  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.058545  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.058580  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.058823  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.059165  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.059439  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.059614  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.158033  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:16:37.190220  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:16:37.219035  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 22:16:37.246894  374880 provision.go:86] duration metric: configureAuth took 294.516334ms
	I0108 22:16:37.246938  374880 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:16:37.247165  374880 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:16:37.247269  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.250766  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.251305  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.251344  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.251654  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.251992  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.252253  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.252456  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.252701  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:37.253066  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:37.253091  374880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:16:37.626837  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:16:37.626886  374880 machine.go:91] provisioned docker machine in 965.198968ms
	I0108 22:16:37.626899  374880 start.go:300] post-start starting for "old-k8s-version-079759" (driver="kvm2")
	I0108 22:16:37.626924  374880 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:16:37.626991  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.627562  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:16:37.627626  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.631567  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.631840  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.631876  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.632070  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.632322  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.632578  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.632749  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.732984  374880 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:16:37.740111  374880 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:16:37.740158  374880 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:16:37.740268  374880 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:16:37.740384  374880 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:16:37.740527  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:16:37.751840  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:37.780796  374880 start.go:303] post-start completed in 153.87709ms
	I0108 22:16:37.780833  374880 fix.go:56] fixHost completed within 23.917911044s
	I0108 22:16:37.780861  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.784200  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.784663  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.784698  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.784916  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.785192  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.785482  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.785652  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.785819  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:37.786310  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:37.786334  374880 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:16:37.908632  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752197.846451761
	
	I0108 22:16:37.908664  374880 fix.go:206] guest clock: 1704752197.846451761
	I0108 22:16:37.908677  374880 fix.go:219] Guest: 2024-01-08 22:16:37.846451761 +0000 UTC Remote: 2024-01-08 22:16:37.780837729 +0000 UTC m=+368.040141999 (delta=65.614032ms)
	I0108 22:16:37.908740  374880 fix.go:190] guest clock delta is within tolerance: 65.614032ms
	I0108 22:16:37.908756  374880 start.go:83] releasing machines lock for "old-k8s-version-079759", held for 24.045885784s
	I0108 22:16:37.908801  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.909113  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:37.912363  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.912708  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.912745  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.913058  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913581  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913769  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913860  374880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:16:37.913906  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.914052  374880 ssh_runner.go:195] Run: cat /version.json
	I0108 22:16:37.914081  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.916674  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917009  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917330  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.917371  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917433  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.917523  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.917545  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917622  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.917791  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.917862  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.917973  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.918026  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.918185  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.918303  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:38.009398  374880 ssh_runner.go:195] Run: systemctl --version
	I0108 22:16:38.040945  374880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:16:38.191198  374880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:16:38.198405  374880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:16:38.198504  374880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:16:38.218602  374880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:16:38.218641  374880 start.go:475] detecting cgroup driver to use...
	I0108 22:16:38.218722  374880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:16:38.234161  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:16:38.250033  374880 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:16:38.250107  374880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:16:38.266262  374880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:16:38.281553  374880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:16:38.402503  374880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:16:38.558016  374880 docker.go:219] disabling docker service ...
	I0108 22:16:38.558124  374880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:16:38.573689  374880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:16:38.589002  374880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:16:38.718943  374880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:16:38.853252  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:16:38.869464  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:16:38.890384  374880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 22:16:38.890538  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.904645  374880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:16:38.904745  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.916308  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.927747  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.938877  374880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:16:38.951536  374880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:16:38.961810  374880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:16:38.961889  374880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:16:38.976131  374880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:16:38.990253  374880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:16:39.129313  374880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:16:39.322691  374880 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:16:39.322796  374880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:16:39.329204  374880 start.go:543] Will wait 60s for crictl version
	I0108 22:16:39.329317  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:39.333991  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:16:39.381363  374880 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:16:39.381484  374880 ssh_runner.go:195] Run: crio --version
	I0108 22:16:39.435964  374880 ssh_runner.go:195] Run: crio --version
	I0108 22:16:39.499543  374880 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0108 22:16:39.501084  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:39.504205  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:39.504541  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:39.504579  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:39.504935  374880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 22:16:39.510323  374880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:39.526998  374880 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:16:39.527057  374880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:39.577709  374880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0108 22:16:39.577793  374880 ssh_runner.go:195] Run: which lz4
	I0108 22:16:39.582925  374880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:16:39.589373  374880 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:16:39.589421  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0108 22:16:37.184707  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:39.683810  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.537007  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:37.037157  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:37.061202  375556 api_server.go:72] duration metric: took 2.525037167s to wait for apiserver process to appear ...
	I0108 22:16:37.061229  375556 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:37.061250  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:37.061790  375556 api_server.go:269] stopped: https://192.168.50.18:8444/healthz: Get "https://192.168.50.18:8444/healthz": dial tcp 192.168.50.18:8444: connect: connection refused
	I0108 22:16:37.561995  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:38.852752  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:41.361118  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:42.562614  375556 api_server.go:269] stopped: https://192.168.50.18:8444/healthz: Get "https://192.168.50.18:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 22:16:42.562680  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:42.626918  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:42.626956  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:43.061435  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:43.078776  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:43.078841  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:43.561364  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:43.575304  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:43.575397  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:44.061694  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:44.072328  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:44.072394  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:44.561536  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:44.572055  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 200:
	ok
	I0108 22:16:44.586947  375556 api_server.go:141] control plane version: v1.28.4
	I0108 22:16:44.587011  375556 api_server.go:131] duration metric: took 7.52577273s to wait for apiserver health ...
	I0108 22:16:44.587029  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:16:44.587040  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:44.765569  375556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:41.520470  374880 crio.go:444] Took 1.937584 seconds to copy over tarball
	I0108 22:16:41.520541  374880 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:16:41.683864  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:44.183495  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:44.867194  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:44.881203  375556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:44.906051  375556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:44.958770  375556 system_pods.go:59] 8 kube-system pods found
	I0108 22:16:44.958813  375556 system_pods.go:61] "coredns-5dd5756b68-vcmh6" [4d87af85-075d-427c-b4ca-ba57421fc8de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:44.958823  375556 system_pods.go:61] "etcd-default-k8s-diff-port-292054" [5353bc6f-061b-414b-823b-fa224887733c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:44.958831  375556 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-292054" [aa609bfc-ba8f-4d82-bdcd-2f17e0b1b2a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:44.958838  375556 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-292054" [2500070d-a348-47a9-a1d6-525eb3ee12d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:44.958847  375556 system_pods.go:61] "kube-proxy-f4xsp" [d0987c89-c598-4ae9-a60a-bad8df066d0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:44.958867  375556 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-292054" [9b4e73b7-a4ff-469f-b03e-1170d068af2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:44.958883  375556 system_pods.go:61] "metrics-server-57f55c9bc5-6w57p" [7a85be99-ad7e-4866-a8d8-0972435dfd02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:44.958899  375556 system_pods.go:61] "storage-provisioner" [4be6edbe-cb8e-4598-9d23-1cefc0afc184] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:44.958908  375556 system_pods.go:74] duration metric: took 52.82566ms to wait for pod list to return data ...
	I0108 22:16:44.958923  375556 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:44.965171  375556 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:44.965220  375556 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:44.965235  375556 node_conditions.go:105] duration metric: took 6.306299ms to run NodePressure ...
	I0108 22:16:44.965271  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:43.845812  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:45.851004  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:45.115268  374880 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.594690355s)
	I0108 22:16:45.115304  374880 crio.go:451] Took 3.594805 seconds to extract the tarball
	I0108 22:16:45.115316  374880 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:16:45.165012  374880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:45.542219  374880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0108 22:16:45.542266  374880 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:16:45.542362  374880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:45.542384  374880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.542409  374880 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 22:16:45.542451  374880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.542489  374880 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.542392  374880 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.542666  374880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.542661  374880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.543883  374880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.543921  374880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.543888  374880 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 22:16:45.543944  374880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.543888  374880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:45.543970  374880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.543895  374880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.544327  374880 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.737830  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.747956  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0108 22:16:45.780688  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.799788  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.811226  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.819948  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.857132  374880 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0108 22:16:45.857195  374880 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.857257  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.867494  374880 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0108 22:16:45.867547  374880 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0108 22:16:45.867622  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.871438  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.900657  374880 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0108 22:16:45.900706  374880 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.900755  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.986789  374880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0108 22:16:45.986850  374880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.986909  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.001283  374880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0108 22:16:46.001335  374880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:46.001389  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.009750  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0108 22:16:46.009783  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0108 22:16:46.009830  374880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0108 22:16:46.009848  374880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0108 22:16:46.009879  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:46.009887  374880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:46.009887  374880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:46.009904  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:46.009929  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.009967  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:46.009933  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.173258  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0108 22:16:46.173293  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 22:16:46.173387  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:46.173402  374880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.173451  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0108 22:16:46.173458  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0108 22:16:46.173539  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:46.173588  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0108 22:16:46.238533  374880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0108 22:16:46.238562  374880 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.238589  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0108 22:16:46.238619  374880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.238692  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0108 22:16:46.499734  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:47.197262  374880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0108 22:16:47.197344  374880 cache_images.go:92] LoadImages completed in 1.65506117s
	W0108 22:16:47.197431  374880 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0108 22:16:47.197628  374880 ssh_runner.go:195] Run: crio config
	I0108 22:16:47.273121  374880 cni.go:84] Creating CNI manager for ""
	I0108 22:16:47.273164  374880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:47.273206  374880 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:16:47.273242  374880 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-079759 NodeName:old-k8s-version-079759 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 22:16:47.273439  374880 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-079759"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-079759
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.183:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:16:47.273557  374880 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-079759 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079759 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:16:47.273641  374880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 22:16:47.284374  374880 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:16:47.284528  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:16:47.295740  374880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0108 22:16:47.317874  374880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:16:47.339820  374880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0108 22:16:47.365063  374880 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0108 22:16:47.369942  374880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:47.387586  374880 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759 for IP: 192.168.39.183
	I0108 22:16:47.387637  374880 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:16:47.387862  374880 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:16:47.387929  374880 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:16:47.388036  374880 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.key
	I0108 22:16:47.388144  374880 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.key.a2b84326
	I0108 22:16:47.388185  374880 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.key
	I0108 22:16:47.388370  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:16:47.388426  374880 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:16:47.388449  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:16:47.388490  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:16:47.388524  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:16:47.388562  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:16:47.388629  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:47.389626  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:16:47.424129  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:16:47.455835  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:47.489732  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:47.523253  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:47.555019  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:47.587218  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:47.620629  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:47.654460  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:47.688945  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:47.722824  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:47.754016  374880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:47.773665  374880 ssh_runner.go:195] Run: openssl version
	I0108 22:16:47.779972  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:47.794327  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.801998  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.802101  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.808765  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:47.822088  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:47.836322  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.843412  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.843508  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.852467  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:47.871573  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:47.886132  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.892165  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.892250  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.898728  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:47.911118  374880 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:47.918486  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:47.928188  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:47.936324  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:47.942939  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:47.952136  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:47.962062  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 22:16:47.969861  374880 kubeadm.go:404] StartCluster: {Name:old-k8s-version-079759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079759 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:47.969986  374880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:47.970065  374880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:48.023933  374880 cri.go:89] found id: ""
	I0108 22:16:48.024025  374880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:48.040341  374880 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:48.040377  374880 kubeadm.go:636] restartCluster start
	I0108 22:16:48.040461  374880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:48.051709  374880 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:48.053467  374880 kubeconfig.go:92] found "old-k8s-version-079759" server: "https://192.168.39.183:8443"
	I0108 22:16:48.057824  374880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:48.071248  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:48.071367  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:48.086864  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:48.572297  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:48.572426  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:48.590996  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:49.072205  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:49.072316  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:49.085908  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:49.571496  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:49.571641  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:49.587609  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:46.683555  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:48.683848  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:47.463595  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.498282893s)
	I0108 22:16:47.463651  375556 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:47.494376  375556 kubeadm.go:787] kubelet initialised
	I0108 22:16:47.494409  375556 kubeadm.go:788] duration metric: took 30.746268ms waiting for restarted kubelet to initialise ...
	I0108 22:16:47.494419  375556 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:47.518711  375556 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:49.532387  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:47.854322  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:50.347325  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:52.349479  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:50.071318  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:50.071492  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:50.087514  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:50.572137  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:50.572248  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:50.586581  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.072060  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:51.072182  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:51.087008  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.571464  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:51.571586  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:51.585684  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:52.072246  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:52.072323  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:52.087689  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:52.572243  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:52.572347  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:52.587037  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:53.071470  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:53.071589  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:53.086911  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:53.571460  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:53.571553  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:53.586045  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:54.072236  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:54.072358  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:54.087701  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:54.572312  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:54.572446  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:54.587922  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.181229  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:53.182527  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:52.026615  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:54.027979  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:54.849162  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:57.346988  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:55.071292  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:55.071441  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:55.090623  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:55.572144  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:55.572231  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:55.587405  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:56.071926  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:56.072056  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:56.086264  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:56.571790  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:56.571930  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:56.586088  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:57.071438  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:57.071546  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:57.087310  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:57.571491  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:57.571640  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:57.585754  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:58.071604  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:58.071723  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:58.087027  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:58.087070  374880 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:58.087086  374880 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:58.087128  374880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:58.087206  374880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:58.137792  374880 cri.go:89] found id: ""
	I0108 22:16:58.137875  374880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:58.157140  374880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:58.171953  374880 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:58.172029  374880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:58.186287  374880 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:58.186325  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:58.316514  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.124691  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.386136  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.490503  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.609542  374880 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:59.609648  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:55.684783  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:58.189882  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:56.527144  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:58.529935  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:01.030202  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:59.350073  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:01.845861  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:00.109804  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:00.610728  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.110191  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.609754  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.638919  374880 api_server.go:72] duration metric: took 2.029378055s to wait for apiserver process to appear ...
	I0108 22:17:01.638952  374880 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:17:01.638975  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:00.681951  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:02.683028  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:04.685040  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:03.527242  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:05.527888  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:03.850211  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:06.350594  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:06.639278  374880 api_server.go:269] stopped: https://192.168.39.183:8443/healthz: Get "https://192.168.39.183:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 22:17:06.639347  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.110234  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:17:08.110269  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:17:08.110287  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.268403  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.268437  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:08.268451  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.300726  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.300787  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:08.639135  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.676558  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.676598  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:09.139592  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:09.151081  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:09.151120  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:09.639741  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:09.646812  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0108 22:17:09.656279  374880 api_server.go:141] control plane version: v1.16.0
	I0108 22:17:09.656318  374880 api_server.go:131] duration metric: took 8.017357804s to wait for apiserver health ...
	I0108 22:17:09.656333  374880 cni.go:84] Creating CNI manager for ""
	I0108 22:17:09.656342  374880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:17:09.658633  374880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:17:09.660081  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:17:09.670922  374880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:17:09.697148  374880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:17:09.710916  374880 system_pods.go:59] 7 kube-system pods found
	I0108 22:17:09.710958  374880 system_pods.go:61] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:09.710966  374880 system_pods.go:61] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:09.710974  374880 system_pods.go:61] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:09.710982  374880 system_pods.go:61] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Pending
	I0108 22:17:09.710988  374880 system_pods.go:61] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:09.710994  374880 system_pods.go:61] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:09.710999  374880 system_pods.go:61] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:09.711007  374880 system_pods.go:74] duration metric: took 13.819282ms to wait for pod list to return data ...
	I0108 22:17:09.711017  374880 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:17:09.717809  374880 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:17:09.717862  374880 node_conditions.go:123] node cpu capacity is 2
	I0108 22:17:09.717882  374880 node_conditions.go:105] duration metric: took 6.857808ms to run NodePressure ...
	I0108 22:17:09.717921  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:17:07.181980  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:09.182492  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:10.147851  374880 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:17:10.155593  374880 kubeadm.go:787] kubelet initialised
	I0108 22:17:10.155627  374880 kubeadm.go:788] duration metric: took 7.730921ms waiting for restarted kubelet to initialise ...
	I0108 22:17:10.155636  374880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:10.162330  374880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.173343  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.173384  374880 pod_ready.go:81] duration metric: took 11.015314ms waiting for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.173398  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.173408  374880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.181308  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "etcd-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.181354  374880 pod_ready.go:81] duration metric: took 7.925248ms waiting for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.181370  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "etcd-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.181382  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.201297  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.201340  374880 pod_ready.go:81] duration metric: took 19.943972ms waiting for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.201355  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.201364  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.212246  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.212303  374880 pod_ready.go:81] duration metric: took 10.921798ms waiting for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.212326  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.212337  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.554958  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-proxy-mfs65" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.554990  374880 pod_ready.go:81] duration metric: took 342.644311ms waiting for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.555000  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-proxy-mfs65" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.555014  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.952644  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.952690  374880 pod_ready.go:81] duration metric: took 397.663927ms waiting for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.952705  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.952721  374880 pod_ready.go:38] duration metric: took 797.073923ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:10.952756  374880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:17:10.966105  374880 ops.go:34] apiserver oom_adj: -16
	I0108 22:17:10.966142  374880 kubeadm.go:640] restartCluster took 22.925755113s
	I0108 22:17:10.966160  374880 kubeadm.go:406] StartCluster complete in 22.996305207s
	I0108 22:17:10.966183  374880 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:17:10.966269  374880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:17:10.968639  374880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:17:10.968991  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:17:10.969141  374880 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:17:10.969252  374880 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969268  374880 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969273  374880 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:17:10.969292  374880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-079759"
	I0108 22:17:10.969296  374880 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-079759"
	W0108 22:17:10.969314  374880 addons.go:246] addon metrics-server should already be in state true
	I0108 22:17:10.969351  374880 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969368  374880 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-079759"
	W0108 22:17:10.969375  374880 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:17:10.969393  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.969409  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.969785  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969823  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969832  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.969824  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969916  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.969926  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.990948  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0108 22:17:10.991126  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0108 22:17:10.991782  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:10.991979  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:10.992429  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:10.992473  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:10.992593  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:10.992618  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:10.992993  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:10.993076  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:10.993348  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:10.993741  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.993822  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.997882  374880 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-079759"
	W0108 22:17:10.997908  374880 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:17:10.997937  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.998375  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.998422  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.014704  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0108 22:17:11.015259  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.015412  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0108 22:17:11.016128  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.016160  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.016532  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.017165  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:11.017214  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.017521  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.018124  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.018140  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.018560  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.018854  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.018926  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0108 22:17:11.019671  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.020333  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.020353  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.020686  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.021353  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:11.021406  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.021696  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.024514  374880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:17:11.026172  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:17:11.026202  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:17:11.026238  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.031029  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.031951  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.031979  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.032327  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.032560  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.032709  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.032862  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.039130  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0108 22:17:11.039792  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.040408  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.040426  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.040821  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.041071  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.041764  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45497
	I0108 22:17:11.042444  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.042927  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.042952  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.043292  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.043498  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.043832  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.046099  374880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:17:07.529123  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:09.529950  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:11.048145  374880 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:17:11.048189  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:17:11.048231  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.045325  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.048952  374880 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:17:11.048976  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:17:11.049021  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.052466  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.052852  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.052891  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.053248  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.053542  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.053781  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.053964  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.062218  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.062324  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.062338  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.062363  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.063474  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.063729  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.063926  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.190657  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:17:11.190690  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:17:11.221757  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:17:11.254133  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:17:11.285976  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:17:11.286005  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:17:11.365594  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:17:11.365632  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:17:11.406494  374880 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 22:17:11.459160  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:17:11.475488  374880 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-079759" context rescaled to 1 replicas
	I0108 22:17:11.475557  374880 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:17:11.478952  374880 out.go:177] * Verifying Kubernetes components...
	I0108 22:17:11.480674  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:17:12.238037  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.016231756s)
	I0108 22:17:12.238158  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.238178  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.238585  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.238616  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.238630  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.238640  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.238649  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.238928  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.238953  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.292897  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.292926  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.293228  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.293249  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.297621  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.043443256s)
	I0108 22:17:12.297697  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.297717  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.298050  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.298107  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.298121  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.298136  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.298151  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.298377  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.298434  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.298449  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.460391  374880 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-079759" to be "Ready" ...
	I0108 22:17:12.460519  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.001301389s)
	I0108 22:17:12.460578  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.460600  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.460930  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.460950  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.460970  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.460980  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.461238  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.461262  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.461278  374880 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-079759"
	I0108 22:17:12.461289  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.464523  374880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0108 22:17:08.848369  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:11.349358  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:12.466030  374880 addons.go:508] enable addons completed in 1.496887794s: enabled=[default-storageclass storage-provisioner metrics-server]
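	(Annotation, not test output.) The addon enablement above works by scp-ing each manifest to the node and applying it with the node-local kubectl, exactly as the "kubectl apply -f /etc/kubernetes/addons/..." runner lines show. A minimal sketch of reproducing that step by hand, with the profile, binary path, and manifest names taken from the log; using `minikube ssh` as the entry point is an assumption, any SSH session into the node works the same way:

	  minikube ssh -p old-k8s-version-079759 -- \
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.16.0/kubectl apply \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml \
	      -f /etc/kubernetes/addons/storageclass.yaml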
	I0108 22:17:14.465035  374880 node_ready.go:58] node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:11.186335  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:13.680427  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:12.029896  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:14.527011  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:13.847034  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:16.348875  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:16.465852  374880 node_ready.go:58] node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:18.965439  374880 node_ready.go:49] node "old-k8s-version-079759" has status "Ready":"True"
	I0108 22:17:18.965487  374880 node_ready.go:38] duration metric: took 6.505055778s waiting for node "old-k8s-version-079759" to be "Ready" ...
	I0108 22:17:18.965512  374880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:18.972414  374880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.981201  374880 pod_ready.go:92] pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.981242  374880 pod_ready.go:81] duration metric: took 8.788084ms waiting for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.981258  374880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.987118  374880 pod_ready.go:92] pod "etcd-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.987147  374880 pod_ready.go:81] duration metric: took 5.880499ms waiting for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.987165  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.995928  374880 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.995972  374880 pod_ready.go:81] duration metric: took 8.795387ms waiting for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.995990  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.006241  374880 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.006273  374880 pod_ready.go:81] duration metric: took 10.274527ms waiting for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.006288  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.366551  374880 pod_ready.go:92] pod "kube-proxy-mfs65" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.366588  374880 pod_ready.go:81] duration metric: took 360.29132ms waiting for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.366607  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.766225  374880 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.766266  374880 pod_ready.go:81] duration metric: took 399.648483ms waiting for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.766287  374880 pod_ready.go:38] duration metric: took 800.758248ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:19.766317  374880 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:17:19.766407  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:19.790384  374880 api_server.go:72] duration metric: took 8.314784167s to wait for apiserver process to appear ...
	I0108 22:17:19.790417  374880 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:17:19.790442  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:15.682742  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:18.181808  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:19.813424  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0108 22:17:19.814615  374880 api_server.go:141] control plane version: v1.16.0
	I0108 22:17:19.814638  374880 api_server.go:131] duration metric: took 24.214441ms to wait for apiserver health ...
	I0108 22:17:19.814647  374880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:17:19.967792  374880 system_pods.go:59] 7 kube-system pods found
	I0108 22:17:19.967850  374880 system_pods.go:61] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:19.967858  374880 system_pods.go:61] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:19.967865  374880 system_pods.go:61] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:19.967871  374880 system_pods.go:61] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Running
	I0108 22:17:19.967875  374880 system_pods.go:61] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:19.967882  374880 system_pods.go:61] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:19.967896  374880 system_pods.go:61] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:19.967908  374880 system_pods.go:74] duration metric: took 153.252828ms to wait for pod list to return data ...
	I0108 22:17:19.967925  374880 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:17:20.166954  374880 default_sa.go:45] found service account: "default"
	I0108 22:17:20.166999  374880 default_sa.go:55] duration metric: took 199.059234ms for default service account to be created ...
	I0108 22:17:20.167013  374880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:17:20.367805  374880 system_pods.go:86] 7 kube-system pods found
	I0108 22:17:20.367843  374880 system_pods.go:89] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:20.367851  374880 system_pods.go:89] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:20.367878  374880 system_pods.go:89] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:20.367889  374880 system_pods.go:89] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Running
	I0108 22:17:20.367895  374880 system_pods.go:89] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:20.367901  374880 system_pods.go:89] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:20.367908  374880 system_pods.go:89] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:20.367917  374880 system_pods.go:126] duration metric: took 200.897828ms to wait for k8s-apps to be running ...
	I0108 22:17:20.367931  374880 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:17:20.368002  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:17:20.384736  374880 system_svc.go:56] duration metric: took 16.789711ms WaitForService to wait for kubelet.
	I0108 22:17:20.384777  374880 kubeadm.go:581] duration metric: took 8.909185454s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:17:20.384805  374880 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:17:20.566662  374880 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:17:20.566699  374880 node_conditions.go:123] node cpu capacity is 2
	I0108 22:17:20.566713  374880 node_conditions.go:105] duration metric: took 181.900804ms to run NodePressure ...
	I0108 22:17:20.566733  374880 start.go:228] waiting for startup goroutines ...
	I0108 22:17:20.566743  374880 start.go:233] waiting for cluster config update ...
	I0108 22:17:20.566758  374880 start.go:242] writing updated cluster config ...
	I0108 22:17:20.567148  374880 ssh_runner.go:195] Run: rm -f paused
	I0108 22:17:20.625096  374880 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0108 22:17:20.627497  374880 out.go:177] 
	W0108 22:17:20.629694  374880 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0108 22:17:20.631265  374880 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0108 22:17:20.632916  374880 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-079759" cluster and "default" namespace by default
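	(Annotation, not test output.) At this point the old-k8s-version profile is up and every system-critical pod the waiter polls for has reported Ready. A minimal sketch of the same check done from the host, with the context name taken from the log and the `k8s-app=kube-dns` selector being the standard kube-system label listed in the waiter's label set above:

	  kubectl --context old-k8s-version-079759 get nodes
	  kubectl --context old-k8s-version-079759 -n kube-system wait \
	    --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	  kubectl --context old-k8s-version-079759 -n kube-system get pods -o wide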
	I0108 22:17:16.529078  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:19.030929  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:18.848535  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:20.848603  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:20.182275  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:22.183490  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:24.682561  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:21.528256  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:23.529114  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:26.027560  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:23.346430  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:25.348995  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.182420  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:29.183480  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.530319  375556 pod_ready.go:92] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.530347  375556 pod_ready.go:81] duration metric: took 40.011595743s waiting for pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.530357  375556 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.537548  375556 pod_ready.go:92] pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.537577  375556 pod_ready.go:81] duration metric: took 7.212322ms waiting for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.537588  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.549788  375556 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.549830  375556 pod_ready.go:81] duration metric: took 12.233749ms waiting for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.549845  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.558337  375556 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.558364  375556 pod_ready.go:81] duration metric: took 8.510648ms waiting for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.558375  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4xsp" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.568980  375556 pod_ready.go:92] pod "kube-proxy-f4xsp" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.569008  375556 pod_ready.go:81] duration metric: took 10.626925ms waiting for pod "kube-proxy-f4xsp" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.569018  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.924746  375556 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.924792  375556 pod_ready.go:81] duration metric: took 355.765575ms waiting for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.924810  375556 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:29.934031  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.846645  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:29.848666  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:32.347317  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:31.681795  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.183509  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:31.935866  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.434680  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.850409  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:37.348417  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:36.681720  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:39.187220  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:36.933398  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:38.937527  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:39.849140  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:42.348407  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:41.681963  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:44.183281  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:41.434499  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:43.438745  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:45.934532  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:44.846802  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:46.847285  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:46.683139  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:49.180610  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:47.942228  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:50.434779  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:49.346290  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:51.346592  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:51.181365  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:53.182147  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:52.435305  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:54.933017  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:53.347169  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:55.847921  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:55.680794  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:57.683942  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:59.684807  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:56.933676  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:59.433266  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:58.346863  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:00.351598  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:02.358340  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:02.183383  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:04.684356  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:01.438892  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:03.942882  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:04.845380  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:06.850561  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:07.182060  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:09.182524  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:06.433230  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:08.435570  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:10.933834  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:08.853139  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:11.345311  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:11.183083  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.185196  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.435974  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.934920  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.347243  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.350752  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.683154  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:18.183396  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:17.938857  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.434388  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:17.849663  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.349073  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.349854  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.183740  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.681755  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.938829  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:24.940050  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:24.845935  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:26.848602  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:25.182926  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:27.681471  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:27.433983  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:29.933179  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:29.348482  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:31.848768  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:30.182593  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:32.184633  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:34.684351  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:31.935920  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:34.432407  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:33.849853  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:36.347248  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:37.185296  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:39.683266  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:36.434742  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:38.935788  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:38.347422  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:40.847846  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:42.184271  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:44.191899  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:41.434194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:43.435816  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:45.436582  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:43.348144  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:45.850291  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:46.681976  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:48.684379  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:47.934501  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:50.432989  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:48.346408  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:50.348943  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:51.181865  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:53.182990  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:52.433070  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:54.442432  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:52.846607  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:54.850642  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:57.347230  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:55.681392  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:57.683410  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:56.932551  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:58.935585  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:59.348127  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:01.848981  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:00.183662  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:02.681392  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:04.683283  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:01.433125  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:03.433714  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:05.434985  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:03.849460  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:06.349541  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:07.182372  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:09.681196  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:07.935969  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:10.435837  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:08.847292  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:10.850261  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:11.681770  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:13.683390  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:12.439563  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:14.933378  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:13.347217  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:15.847524  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:16.181226  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:18.182271  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:16.936400  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:19.433956  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:18.347048  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:20.846947  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:20.182396  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:22.681453  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:24.682678  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:21.934747  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:23.935826  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:22.847819  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:24.847981  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:27.346372  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:27.181829  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:29.686277  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:26.433266  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:28.433601  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:30.435331  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:29.349171  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:31.848107  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:31.686784  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.181838  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:32.932383  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.933487  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.349446  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:36.845807  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:36.182711  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:38.183592  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:37.433841  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:39.440368  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:38.847000  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:40.849528  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:40.681394  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:42.681803  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:41.934279  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:44.433480  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:43.346283  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:45.849805  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:45.182604  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:47.183086  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:49.681891  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:46.934165  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:49.433592  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:48.346422  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:50.346711  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:52.347386  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:52.181241  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:54.184167  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:51.435757  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:53.932937  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:55.935076  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:54.847306  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:56.849761  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:56.681736  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:59.182156  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:58.433892  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:00.435066  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:59.348176  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:01.847094  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:01.682869  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.183165  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:02.934032  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.935393  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.347516  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:06.846388  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:06.681333  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:08.684291  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:07.436354  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:09.934776  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:08.849876  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.346794  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.184760  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.681471  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.935382  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.935718  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.347573  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:15.846434  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:15.684425  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:18.182489  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:16.435556  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:18.934238  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:17.847804  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:19.851620  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:22.347305  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:20.183538  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:21.174145  375205 pod_ready.go:81] duration metric: took 4m0.001134505s waiting for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" ...
	E0108 22:20:21.174196  375205 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:20:21.174225  375205 pod_ready.go:38] duration metric: took 4m11.09670924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:20:21.174739  375205 kubeadm.go:640] restartCluster took 4m32.919154523s
	W0108 22:20:21.174932  375205 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:20:21.175031  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
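	(Annotation, not test output.) This restart path gives up because metrics-server-57f55c9bc5-pk8bm never reaches Ready within the 4m extra-wait window, so the cluster is reset instead. A minimal sketch of the follow-up inspection one would normally run before the reset; the context placeholder must be replaced with this profile's kubeconfig context, and addressing the owning Deployment as `metrics-server` is an assumption inferred from the ReplicaSet-style pod name:

	  kubectl --context <profile-context> -n kube-system describe pod metrics-server-57f55c9bc5-pk8bm
	  kubectl --context <profile-context> -n kube-system logs deployment/metrics-server --all-containers
	  kubectl --context <profile-context> -n kube-system get events --sort-by=.lastTimestamp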
	I0108 22:20:21.437480  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:23.437985  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:25.934631  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:24.847918  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:27.354150  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:28.434309  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:30.935564  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:29.845550  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:31.847597  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:32.338942  375293 pod_ready.go:81] duration metric: took 4m0.001163118s waiting for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" ...
	E0108 22:20:32.338972  375293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:20:32.338994  375293 pod_ready.go:38] duration metric: took 4m8.522193777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:20:32.339022  375293 kubeadm.go:640] restartCluster took 4m31.730992352s
	W0108 22:20:32.339087  375293 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:20:32.339116  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 22:20:32.935958  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:35.434816  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:36.302806  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.127706719s)
	I0108 22:20:36.302938  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:20:36.321621  375205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:20:36.334281  375205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:20:36.346671  375205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:20:36.346717  375205 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:20:36.614321  375205 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
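Because the restart path timed out on the never-ready metrics-server pod, minikube falls back to a full reset and re-init of the control plane. A sketch of the equivalent manual sequence, built from the commands logged above (v1.29.0-rc.2 binary paths as shown; the preflight-error list is the one in the log line above):

    # tear down the existing control plane state (destructive)
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # re-initialize from minikube's generated config
    # --ignore-preflight-errors takes the full list shown in the log line above (elided here for width)
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=<list-from-log>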
	I0108 22:20:37.936328  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:40.435692  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:42.933586  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:45.434194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:48.562754  375205 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0108 22:20:48.562854  375205 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:20:48.562933  375205 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:20:48.563069  375205 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:20:48.563228  375205 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:20:48.563339  375205 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:20:48.565241  375205 out.go:204]   - Generating certificates and keys ...
	I0108 22:20:48.565369  375205 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:20:48.565449  375205 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:20:48.565542  375205 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:20:48.565610  375205 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:20:48.565733  375205 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:20:48.565840  375205 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:20:48.565938  375205 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:20:48.566036  375205 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:20:48.566148  375205 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:20:48.566255  375205 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:20:48.566336  375205 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:20:48.566437  375205 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:20:48.566521  375205 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:20:48.566606  375205 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0108 22:20:48.566682  375205 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:20:48.566771  375205 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:20:48.566859  375205 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:20:48.566957  375205 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:20:48.567046  375205 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:20:48.569013  375205 out.go:204]   - Booting up control plane ...
	I0108 22:20:48.569130  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:20:48.569247  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:20:48.569353  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:20:48.569468  375205 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:20:48.569588  375205 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:20:48.569656  375205 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:20:48.569873  375205 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:20:48.569977  375205 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002900 seconds
	I0108 22:20:48.570115  375205 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:20:48.570289  375205 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:20:48.570372  375205 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:20:48.570558  375205 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-675668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:20:48.570648  375205 kubeadm.go:322] [bootstrap-token] Using token: t5purj.kqjcf0swk5rb5mxk
	I0108 22:20:48.572249  375205 out.go:204]   - Configuring RBAC rules ...
	I0108 22:20:48.572407  375205 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:20:48.572525  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:20:48.572698  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:20:48.572845  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:20:48.572985  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:20:48.573060  375205 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:20:48.573192  375205 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:20:48.573253  375205 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:20:48.573309  375205 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:20:48.573316  375205 kubeadm.go:322] 
	I0108 22:20:48.573365  375205 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:20:48.573372  375205 kubeadm.go:322] 
	I0108 22:20:48.573433  375205 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:20:48.573440  375205 kubeadm.go:322] 
	I0108 22:20:48.573466  375205 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:20:48.573516  375205 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:20:48.573559  375205 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:20:48.573565  375205 kubeadm.go:322] 
	I0108 22:20:48.573608  375205 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:20:48.573614  375205 kubeadm.go:322] 
	I0108 22:20:48.573656  375205 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:20:48.573663  375205 kubeadm.go:322] 
	I0108 22:20:48.573705  375205 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:20:48.573774  375205 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:20:48.573830  375205 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:20:48.573836  375205 kubeadm.go:322] 
	I0108 22:20:48.573902  375205 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:20:48.573968  375205 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:20:48.573974  375205 kubeadm.go:322] 
	I0108 22:20:48.574041  375205 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t5purj.kqjcf0swk5rb5mxk \
	I0108 22:20:48.574137  375205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:20:48.574168  375205 kubeadm.go:322] 	--control-plane 
	I0108 22:20:48.574179  375205 kubeadm.go:322] 
	I0108 22:20:48.574277  375205 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:20:48.574288  375205 kubeadm.go:322] 
	I0108 22:20:48.574369  375205 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t5purj.kqjcf0swk5rb5mxk \
	I0108 22:20:48.574510  375205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
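The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA's public key. If it needs to be recomputed later, the conventional kubeadm recipe is the following sketch; the CA path assumes minikube's certificateDir (/var/lib/minikube/certs) from the [certs] line above:

    # assumes the minikube certificateDir; on a stock kubeadm node the CA lives under /etc/kubernetes/pki
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | cut -d' ' -f1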
	I0108 22:20:48.574532  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:20:48.574545  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:20:48.576776  375205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:20:48.578238  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:20:48.605767  375205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
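The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration chosen for the kvm2 + crio combination. An illustrative minimal bridge conflist of that shape (field values such as the 10.244.0.0/16 pod subnet are assumptions, not necessarily the exact file minikube generated):

    # illustrative only: not byte-for-byte what minikube writes
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF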
	I0108 22:20:48.656602  375205 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:20:48.656700  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=no-preload-675668 minikube.k8s.io/updated_at=2024_01_08T22_20_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:48.656701  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:48.954525  375205 ops.go:34] apiserver oom_adj: -16
	I0108 22:20:48.954705  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:49.454907  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.014263  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (17.675119667s)
	I0108 22:20:50.014357  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:20:50.032616  375293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:20:50.046779  375293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:20:50.059243  375293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:20:50.059321  375293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:20:50.125341  375293 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:20:50.125427  375293 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:20:50.314274  375293 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:20:50.314692  375293 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:20:50.314859  375293 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:20:50.613241  375293 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:20:47.934671  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:50.435675  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:50.615123  375293 out.go:204]   - Generating certificates and keys ...
	I0108 22:20:50.615298  375293 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:20:50.615442  375293 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:20:50.615588  375293 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:20:50.615684  375293 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:20:50.615978  375293 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:20:50.616644  375293 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:20:50.617070  375293 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:20:50.617625  375293 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:20:50.618175  375293 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:20:50.618746  375293 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:20:50.619222  375293 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:20:50.619315  375293 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:20:50.750595  375293 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:20:50.925827  375293 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:20:51.210091  375293 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:20:51.341979  375293 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:20:51.342383  375293 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:20:51.346252  375293 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:20:51.348515  375293 out.go:204]   - Booting up control plane ...
	I0108 22:20:51.348656  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:20:51.349029  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:20:51.350374  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:20:51.368778  375293 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:20:51.370050  375293 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:20:51.370127  375293 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:20:51.533956  375293 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:20:49.955240  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.455461  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.954656  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:51.455494  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:51.954708  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.454966  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.955643  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:53.454696  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:53.955234  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:54.455436  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.934792  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:55.433713  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:54.955090  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:55.454594  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:55.954634  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:56.455479  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:56.954866  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.455465  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.954857  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:58.454611  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:58.955416  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:59.455690  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.434365  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:59.932616  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:01.038928  375293 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503619 seconds
	I0108 22:21:01.039086  375293 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:21:01.066204  375293 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:21:01.633859  375293 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:21:01.634073  375293 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-903819 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:21:02.161422  375293 kubeadm.go:322] [bootstrap-token] Using token: m5gf05.lf63ehk148mqhzsy
	I0108 22:20:59.954870  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:00.455632  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:00.954611  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:01.455512  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:01.955058  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.130771  375205 kubeadm.go:1088] duration metric: took 13.474145806s to wait for elevateKubeSystemPrivileges.
	I0108 22:21:02.130812  375205 kubeadm.go:406] StartCluster complete in 5m13.930335887s
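The ~13.5s of repeated "kubectl get sa default" calls above is minikube polling for the default ServiceAccount before it considers kube-system privileges elevated; both commands appear verbatim in the log. Done by hand against this node it would look roughly like:

    KCTL=/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl
    # poll until the controller-manager has created the "default" ServiceAccount
    until sudo "$KCTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
    # the minikube-rbac binding created above gives kube-system:default cluster-admin
    sudo "$KCTL" get clusterrolebinding minikube-rbac --kubeconfig=/var/lib/minikube/kubeconfig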
	I0108 22:21:02.130872  375205 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:02.131052  375205 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:21:02.133316  375205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:02.133620  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:21:02.133769  375205 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
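The toEnable map reflects what the test requested: only storage-provisioner, default-storageclass and metrics-server are switched on for this profile. Outside the test harness the same addon state would normally be reached with the minikube CLI, e.g.:

    minikube -p no-preload-675668 addons enable metrics-server
    minikube -p no-preload-675668 addons list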
	I0108 22:21:02.133851  375205 addons.go:69] Setting storage-provisioner=true in profile "no-preload-675668"
	I0108 22:21:02.133874  375205 addons.go:237] Setting addon storage-provisioner=true in "no-preload-675668"
	W0108 22:21:02.133885  375205 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:21:02.133902  375205 addons.go:69] Setting default-storageclass=true in profile "no-preload-675668"
	I0108 22:21:02.133931  375205 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-675668"
	I0108 22:21:02.133944  375205 addons.go:69] Setting metrics-server=true in profile "no-preload-675668"
	I0108 22:21:02.133960  375205 addons.go:237] Setting addon metrics-server=true in "no-preload-675668"
	W0108 22:21:02.133970  375205 addons.go:246] addon metrics-server should already be in state true
	I0108 22:21:02.134007  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.133934  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.134493  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134492  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134531  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.133882  375205 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:21:02.134595  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134626  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.134679  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.159537  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0108 22:21:02.159560  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0108 22:21:02.159658  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0108 22:21:02.160218  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160310  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160353  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160816  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160832  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.160837  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160856  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.160923  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160934  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.161384  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161384  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161436  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161578  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.162110  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.162156  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.163070  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.163111  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.166373  375205 addons.go:237] Setting addon default-storageclass=true in "no-preload-675668"
	W0108 22:21:02.166398  375205 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:21:02.166437  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.166793  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.166851  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.186248  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0108 22:21:02.186805  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.187689  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.187721  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.189657  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.189934  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0108 22:21:02.190139  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.190885  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.192512  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.192561  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.192883  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.193058  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.193793  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.193846  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.194831  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0108 22:21:02.197130  375205 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:21:02.195453  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.198890  375205 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:02.198908  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:21:02.198928  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.199474  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.199496  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.202159  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.202458  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.204081  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.204440  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.204470  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.204907  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.205095  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.206369  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.206382  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.208865  375205 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:21:02.207548  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.210754  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:21:02.210777  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:21:02.210806  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.215494  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.216525  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.216572  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.217020  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.217270  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.217433  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.217548  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.218155  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0108 22:21:02.219031  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.219589  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.219613  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.220024  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.220222  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.223150  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.223618  375205 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:02.223638  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:21:02.223662  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.227537  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.228321  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.228364  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.228729  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.228986  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.229244  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.229385  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.376102  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:02.442186  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:21:02.442220  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:21:02.463490  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:02.511966  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:21:02.512007  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:21:02.516771  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
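The sed pipeline above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.61.1 here) and enables query logging. The patched ConfigMap can be inspected afterwards with:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # the patched Corefile should contain, inside the .:53 block:
    #        log
    #        hosts {
    #           192.168.61.1 host.minikube.internal
    #           fallthrough
    #        }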
	I0108 22:21:02.645916  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:02.645958  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:21:02.693299  375205 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-675668" context rescaled to 1 replicas
	I0108 22:21:02.693524  375205 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.153 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:21:02.696133  375205 out.go:177] * Verifying Kubernetes components...
	I0108 22:21:02.163532  375293 out.go:204]   - Configuring RBAC rules ...
	I0108 22:21:02.163667  375293 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:21:02.202175  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:21:02.230273  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:21:02.239237  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:21:02.245892  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:21:02.262139  375293 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:21:02.282319  375293 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:21:02.634155  375293 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:21:02.712856  375293 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:21:02.712895  375293 kubeadm.go:322] 
	I0108 22:21:02.713004  375293 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:21:02.713029  375293 kubeadm.go:322] 
	I0108 22:21:02.713122  375293 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:21:02.713138  375293 kubeadm.go:322] 
	I0108 22:21:02.713175  375293 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:21:02.713243  375293 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:21:02.713342  375293 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:21:02.713367  375293 kubeadm.go:322] 
	I0108 22:21:02.713461  375293 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:21:02.713491  375293 kubeadm.go:322] 
	I0108 22:21:02.713571  375293 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:21:02.713582  375293 kubeadm.go:322] 
	I0108 22:21:02.713672  375293 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:21:02.713775  375293 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:21:02.713903  375293 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:21:02.713916  375293 kubeadm.go:322] 
	I0108 22:21:02.714019  375293 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:21:02.714118  375293 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:21:02.714132  375293 kubeadm.go:322] 
	I0108 22:21:02.714275  375293 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m5gf05.lf63ehk148mqhzsy \
	I0108 22:21:02.714404  375293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:21:02.714427  375293 kubeadm.go:322] 	--control-plane 
	I0108 22:21:02.714439  375293 kubeadm.go:322] 
	I0108 22:21:02.714524  375293 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:21:02.714533  375293 kubeadm.go:322] 
	I0108 22:21:02.714623  375293 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m5gf05.lf63ehk148mqhzsy \
	I0108 22:21:02.714748  375293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:21:02.715538  375293 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:21:02.715812  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:21:02.715830  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:21:02.717948  375293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:21:02.719376  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:21:02.757728  375293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:21:02.792630  375293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:21:02.792734  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.792736  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=embed-certs-903819 minikube.k8s.io/updated_at=2024_01_08T22_21_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.697938  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:02.989011  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:03.814186  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437994456s)
	I0108 22:21:03.814254  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814255  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.350714909s)
	I0108 22:21:03.814286  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.297474579s)
	I0108 22:21:03.814302  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814321  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814317  375205 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0108 22:21:03.814318  375205 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.116341471s)
	I0108 22:21:03.814267  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814391  375205 node_ready.go:35] waiting up to 6m0s for node "no-preload-675668" to be "Ready" ...
	I0108 22:21:03.814667  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.814692  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.814734  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.814742  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.814765  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814789  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814821  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.814855  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.814868  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814878  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814994  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.815008  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.816606  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.816639  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.816649  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.844508  375205 node_ready.go:49] node "no-preload-675668" has status "Ready":"True"
	I0108 22:21:03.844562  375205 node_ready.go:38] duration metric: took 30.150881ms waiting for node "no-preload-675668" to be "Ready" ...
	I0108 22:21:03.844582  375205 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
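The extra wait covers CoreDNS, etcd, the API server, controller-manager, kube-proxy and the scheduler via the labels listed above. A rough kubectl equivalent of the same readiness gate (one wait per label, same budget; unlike minikube's loop, kubectl wait errors out if a selector matches nothing):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done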
	I0108 22:21:03.895674  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.895707  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.896169  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.896196  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.896243  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.916148  375205 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-q6x86" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:04.208779  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.219716131s)
	I0108 22:21:04.208834  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:04.208853  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:04.209240  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:04.209262  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:04.209275  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:04.209289  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:04.209564  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:04.209585  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:04.209599  375205 addons.go:473] Verifying addon metrics-server=true in "no-preload-675668"
	I0108 22:21:04.211402  375205 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 22:21:04.212659  375205 addons.go:508] enable addons completed in 2.078891102s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0108 22:21:01.934579  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:03.936076  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:05.936317  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:03.317224  375293 ops.go:34] apiserver oom_adj: -16
	I0108 22:21:03.317384  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:03.817786  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:04.318579  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:04.817664  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.317487  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.818475  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:06.318507  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:06.818090  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:07.318335  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.932344  375205 pod_ready.go:92] pod "coredns-76f75df574-q6x86" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.932389  375205 pod_ready.go:81] duration metric: took 2.016206796s waiting for pod "coredns-76f75df574-q6x86" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.932404  375205 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.941282  375205 pod_ready.go:92] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.941316  375205 pod_ready.go:81] duration metric: took 8.903771ms waiting for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.941331  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.950226  375205 pod_ready.go:92] pod "kube-apiserver-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.950258  375205 pod_ready.go:81] duration metric: took 8.918375ms waiting for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.950273  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.972742  375205 pod_ready.go:92] pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.972794  375205 pod_ready.go:81] duration metric: took 22.511438ms waiting for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.972816  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b2nx2" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:06.981190  375205 pod_ready.go:92] pod "kube-proxy-b2nx2" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:06.981214  375205 pod_ready.go:81] duration metric: took 1.008391493s waiting for pod "kube-proxy-b2nx2" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:06.981225  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:07.121313  375205 pod_ready.go:92] pod "kube-scheduler-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:07.121348  375205 pod_ready.go:81] duration metric: took 140.114425ms waiting for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:07.121363  375205 pod_ready.go:38] duration metric: took 3.276764424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
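The wait summarised above covers CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler. A rough hand-run equivalent of one of those checks, using the same label selector shown in the log line (illustrative kubectl only, not part of the test run):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s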
	I0108 22:21:07.121385  375205 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:21:07.121458  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:21:07.138313  375205 api_server.go:72] duration metric: took 4.444721115s to wait for apiserver process to appear ...
	I0108 22:21:07.138352  375205 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:21:07.138384  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:21:07.145653  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 200:
	ok
	I0108 22:21:07.148112  375205 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 22:21:07.148146  375205 api_server.go:131] duration metric: took 9.785033ms to wait for apiserver health ...
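The apiserver health gate above is two steps: confirm a kube-apiserver process exists, then poll /healthz until it answers 200 with the body "ok". Reproduced by hand against the endpoint from the log (the curl flags are an illustrative sketch; -k skips certificate verification):

    # same process check the log runs via ssh_runner
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # poll the healthz endpoint shown above; expect the body "ok"
    curl -sk https://192.168.61.153:8443/healthz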
	I0108 22:21:07.148158  375205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:21:07.325218  375205 system_pods.go:59] 8 kube-system pods found
	I0108 22:21:07.325263  375205 system_pods.go:61] "coredns-76f75df574-q6x86" [6cad2e0f-a7af-453d-9eaf-55b56e41e27b] Running
	I0108 22:21:07.325268  375205 system_pods.go:61] "etcd-no-preload-675668" [cd434699-162a-4b04-853d-94dbb1254279] Running
	I0108 22:21:07.325273  375205 system_pods.go:61] "kube-apiserver-no-preload-675668" [d22859b8-f451-40b8-85d7-7f3d548b1af1] Running
	I0108 22:21:07.325279  375205 system_pods.go:61] "kube-controller-manager-no-preload-675668" [8b52fdfe-124a-4d08-b66b-41f1b051fe95] Running
	I0108 22:21:07.325283  375205 system_pods.go:61] "kube-proxy-b2nx2" [b6106f11-9345-4915-b7cc-d2671a7c4e72] Running
	I0108 22:21:07.325287  375205 system_pods.go:61] "kube-scheduler-no-preload-675668" [83562817-27bf-4265-88f0-3dad667687c5] Running
	I0108 22:21:07.325296  375205 system_pods.go:61] "metrics-server-57f55c9bc5-vb2kj" [45489720-2506-46fa-8833-02cbae6f122b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:21:07.325305  375205 system_pods.go:61] "storage-provisioner" [a1c64608-a169-455b-a5e9-0ecb4161432c] Running
	I0108 22:21:07.325323  375205 system_pods.go:74] duration metric: took 177.156331ms to wait for pod list to return data ...
	I0108 22:21:07.325337  375205 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:21:07.521751  375205 default_sa.go:45] found service account: "default"
	I0108 22:21:07.521796  375205 default_sa.go:55] duration metric: took 196.444982ms for default service account to be created ...
	I0108 22:21:07.521809  375205 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:21:07.725848  375205 system_pods.go:86] 8 kube-system pods found
	I0108 22:21:07.725888  375205 system_pods.go:89] "coredns-76f75df574-q6x86" [6cad2e0f-a7af-453d-9eaf-55b56e41e27b] Running
	I0108 22:21:07.725894  375205 system_pods.go:89] "etcd-no-preload-675668" [cd434699-162a-4b04-853d-94dbb1254279] Running
	I0108 22:21:07.725899  375205 system_pods.go:89] "kube-apiserver-no-preload-675668" [d22859b8-f451-40b8-85d7-7f3d548b1af1] Running
	I0108 22:21:07.725904  375205 system_pods.go:89] "kube-controller-manager-no-preload-675668" [8b52fdfe-124a-4d08-b66b-41f1b051fe95] Running
	I0108 22:21:07.725908  375205 system_pods.go:89] "kube-proxy-b2nx2" [b6106f11-9345-4915-b7cc-d2671a7c4e72] Running
	I0108 22:21:07.725913  375205 system_pods.go:89] "kube-scheduler-no-preload-675668" [83562817-27bf-4265-88f0-3dad667687c5] Running
	I0108 22:21:07.725920  375205 system_pods.go:89] "metrics-server-57f55c9bc5-vb2kj" [45489720-2506-46fa-8833-02cbae6f122b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:21:07.725926  375205 system_pods.go:89] "storage-provisioner" [a1c64608-a169-455b-a5e9-0ecb4161432c] Running
	I0108 22:21:07.725937  375205 system_pods.go:126] duration metric: took 204.121913ms to wait for k8s-apps to be running ...
	I0108 22:21:07.725946  375205 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:21:07.726014  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:07.745719  375205 system_svc.go:56] duration metric: took 19.7558ms WaitForService to wait for kubelet.
	I0108 22:21:07.745762  375205 kubeadm.go:581] duration metric: took 5.052181219s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:21:07.745787  375205 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:21:07.923051  375205 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:21:07.923108  375205 node_conditions.go:123] node cpu capacity is 2
	I0108 22:21:07.923124  375205 node_conditions.go:105] duration metric: took 177.330669ms to run NodePressure ...
	I0108 22:21:07.923140  375205 start.go:228] waiting for startup goroutines ...
	I0108 22:21:07.923150  375205 start.go:233] waiting for cluster config update ...
	I0108 22:21:07.923164  375205 start.go:242] writing updated cluster config ...
	I0108 22:21:07.923585  375205 ssh_runner.go:195] Run: rm -f paused
	I0108 22:21:07.985436  375205 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0108 22:21:07.987522  375205 out.go:177] * Done! kubectl is now configured to use "no-preload-675668" cluster and "default" namespace by default
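With the no-preload-675668 profile finished, kubectl is now pointed at that cluster. A quick sanity check from the client side (standard kubectl, shown for illustration):

    kubectl config current-context          # expected to name the no-preload-675668 profile
    kubectl --context no-preload-675668 get nodes -o wide
    kubectl --context no-preload-675668 -n kube-system get pods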
	I0108 22:21:07.936490  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:10.434333  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:07.817734  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:08.318472  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:08.818320  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:09.317791  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:09.818298  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:10.317739  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:10.818233  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:11.317545  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:11.818344  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:12.317620  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:12.817911  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:13.317976  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:13.817670  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:14.317747  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:14.817596  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:15.318339  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:15.465438  375293 kubeadm.go:1088] duration metric: took 12.672788245s to wait for elevateKubeSystemPrivileges.
	I0108 22:21:15.465476  375293 kubeadm.go:406] StartCluster complete in 5m14.917822837s
	I0108 22:21:15.465503  375293 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:15.465612  375293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:21:15.468437  375293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:15.468772  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:21:15.468921  375293 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:21:15.469008  375293 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-903819"
	I0108 22:21:15.469017  375293 addons.go:69] Setting default-storageclass=true in profile "embed-certs-903819"
	I0108 22:21:15.469036  375293 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-903819"
	I0108 22:21:15.469052  375293 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 22:21:15.469064  375293 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:21:15.469060  375293 addons.go:69] Setting metrics-server=true in profile "embed-certs-903819"
	I0108 22:21:15.469037  375293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-903819"
	I0108 22:21:15.469111  375293 addons.go:237] Setting addon metrics-server=true in "embed-certs-903819"
	W0108 22:21:15.469128  375293 addons.go:246] addon metrics-server should already be in state true
	I0108 22:21:15.469139  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.469189  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.469584  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469635  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469676  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.469647  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.469585  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469825  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.488818  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0108 22:21:15.489266  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.491196  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39101
	I0108 22:21:15.491253  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0108 22:21:15.491759  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.491787  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.491816  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.492193  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.492365  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.492383  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.492747  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.492790  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.493002  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.493056  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.493670  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.493702  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.494305  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.494329  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.494841  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.495072  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.499830  375293 addons.go:237] Setting addon default-storageclass=true in "embed-certs-903819"
	W0108 22:21:15.499867  375293 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:21:15.499903  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.500396  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.500568  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.516135  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0108 22:21:15.516748  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.517517  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.517566  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.518117  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.518378  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.519282  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0108 22:21:15.520505  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.520596  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.522491  375293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:21:15.521662  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.524042  375293 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:15.524051  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.524059  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:21:15.524081  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.524560  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.524774  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.527237  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.529443  375293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:21:15.528147  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.528787  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.531192  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:21:15.531217  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:21:15.531249  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.531217  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.531343  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.531599  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.531825  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.532078  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.535903  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0108 22:21:15.536161  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.536527  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.536553  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.536618  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.536766  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.536994  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.537194  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.537359  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.537370  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.537426  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.537948  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.538486  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.538508  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.557562  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I0108 22:21:15.558072  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.558613  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.558643  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.559096  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.559318  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.561435  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.561769  375293 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:15.561788  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:21:15.561809  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.564959  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.565410  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.565442  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.565628  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.565836  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.565994  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.566145  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.740070  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:21:15.740112  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:21:15.762954  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:15.779320  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:15.819423  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:21:15.821997  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:21:15.822039  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:21:15.911195  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:15.911231  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:21:16.022419  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
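The addon install above streams each manifest over SSH (the "scp memory" lines) into /etc/kubernetes/addons/ inside the guest and then applies it with the kubectl binary minikube ships into the VM. One of those applies, spelled out as a plain SSH command using the connection details from the log (an illustrative restatement of what ssh_runner does, not an extra step in the run):

    ssh -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa \
        docker@192.168.72.132 \
        'sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml'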
	I0108 22:21:16.061550  375293 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-903819" context rescaled to 1 replicas
	I0108 22:21:16.061625  375293 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:21:16.063813  375293 out.go:177] * Verifying Kubernetes components...
	I0108 22:21:12.435066  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:14.936374  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:16.065433  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:17.600634  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.837630321s)
	I0108 22:21:17.600727  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.600751  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.601111  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.601133  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:17.601145  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.601155  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.601162  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.601437  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.601478  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.601496  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:17.658136  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.658160  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.658512  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.658539  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.658556  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.633155  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.813676374s)
	I0108 22:21:18.633329  375293 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
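The pipeline completed above patches the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway 192.168.72.1. Reconstructed from the sed expression in the Run line, the injected Corefile block and a way to inspect the result with the same in-guest kubectl (indentation of the block approximated):

    # block inserted ahead of the "forward" plugin by the sed expression above:
    #        hosts {
    #           192.168.72.1 host.minikube.internal
    #           fallthrough
    #        }
    # inspect the patched ConfigMap:
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml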
	I0108 22:21:18.633460  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.610999344s)
	I0108 22:21:18.633535  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.633576  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.633728  375293 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.568262314s)
	I0108 22:21:18.633793  375293 node_ready.go:35] waiting up to 6m0s for node "embed-certs-903819" to be "Ready" ...
	I0108 22:21:18.634123  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.634212  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.634247  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.634274  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.634293  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.634767  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.634836  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.634875  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.634901  375293 addons.go:473] Verifying addon metrics-server=true in "embed-certs-903819"
	I0108 22:21:18.638741  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.85936832s)
	I0108 22:21:18.638810  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.638826  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.639227  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.639301  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.639322  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.639333  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.639353  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.639611  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.639643  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.639652  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.641291  375293 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0108 22:21:17.433629  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:19.436354  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:18.642785  375293 addons.go:508] enable addons completed in 3.173862498s: enabled=[default-storageclass metrics-server storage-provisioner]
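Of the three addons just enabled, metrics-server is the one this log keeps polling afterwards, and its pod stays "Ready":"False" for the rest of the run. The deployment here is pointed at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line earlier), an unresolvable registry, so the image pull presumably never succeeds. Standard ways to confirm that from the cluster side (the k8s-app=metrics-server selector is an assumption about the addon's labels):

    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system describe pod -l k8s-app=metrics-server   # Events should show the failing image pull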
	I0108 22:21:18.710469  375293 node_ready.go:49] node "embed-certs-903819" has status "Ready":"True"
	I0108 22:21:18.710510  375293 node_ready.go:38] duration metric: took 76.686364ms waiting for node "embed-certs-903819" to be "Ready" ...
	I0108 22:21:18.710526  375293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:18.737405  375293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.747084  375293 pod_ready.go:92] pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.747120  375293 pod_ready.go:81] duration metric: took 1.009672279s waiting for pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.747136  375293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.758191  375293 pod_ready.go:92] pod "etcd-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.758217  375293 pod_ready.go:81] duration metric: took 11.073973ms waiting for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.758227  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.770167  375293 pod_ready.go:92] pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.770199  375293 pod_ready.go:81] duration metric: took 11.962809ms waiting for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.770213  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.778549  375293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.778576  375293 pod_ready.go:81] duration metric: took 8.355574ms waiting for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.778593  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqj9b" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.291841  375293 pod_ready.go:92] pod "kube-proxy-hqj9b" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:20.291889  375293 pod_ready.go:81] duration metric: took 513.287335ms waiting for pod "kube-proxy-hqj9b" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.291907  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.639437  375293 pod_ready.go:92] pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:20.639482  375293 pod_ready.go:81] duration metric: took 347.563689ms waiting for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.639507  375293 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:22.648411  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:21.933418  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:24.435043  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:25.150951  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:27.650444  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:26.937451  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:27.925059  375556 pod_ready.go:81] duration metric: took 4m0.000207907s waiting for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" ...
	E0108 22:21:27.925103  375556 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:21:27.925128  375556 pod_ready.go:38] duration metric: took 4m40.430696194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:27.925167  375556 kubeadm.go:640] restartCluster took 5m4.814420494s
	W0108 22:21:27.925297  375556 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:21:27.925360  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 22:21:30.149112  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:32.149588  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:34.150894  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:36.649733  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:39.151257  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:41.647739  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:43.145693  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.220300874s)
	I0108 22:21:43.145789  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:43.162489  375556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:21:43.174147  375556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:21:43.184922  375556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
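The exit status 2 from the ls above is what "config check failed, skipping stale config cleanup" refers to: after the earlier kubeadm reset none of the four kubeconfig files exist, so there is no stale config to clean and the flow goes straight to kubeadm init. The check reduces to the following (an illustrative if/else framing of the same ls command):

    if sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                   /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; then
        echo "existing configs found: stale config cleanup would run"
    else
        echo "configs missing: skip cleanup and proceed to kubeadm init"
    fi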
	I0108 22:21:43.184985  375556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:21:43.249215  375556 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:21:43.249349  375556 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:21:43.441703  375556 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:21:43.441851  375556 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:21:43.441998  375556 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:21:43.739390  375556 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:21:43.742109  375556 out.go:204]   - Generating certificates and keys ...
	I0108 22:21:43.742213  375556 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:21:43.742298  375556 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:21:43.742469  375556 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:21:43.742561  375556 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:21:43.742651  375556 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:21:43.743428  375556 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:21:43.744699  375556 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:21:43.746015  375556 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:21:43.747206  375556 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:21:43.748318  375556 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:21:43.749156  375556 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:21:43.749237  375556 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:21:43.859844  375556 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:21:44.418300  375556 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:21:44.582066  375556 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:21:44.829395  375556 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:21:44.830276  375556 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:21:44.833494  375556 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:21:44.835724  375556 out.go:204]   - Booting up control plane ...
	I0108 22:21:44.835871  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:21:44.835997  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:21:44.836115  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:21:44.858575  375556 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:21:44.859658  375556 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:21:44.859774  375556 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:21:45.004925  375556 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:21:43.648821  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:46.148491  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:48.152137  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:50.649779  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:54.508960  375556 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503706 seconds
	I0108 22:21:54.509100  375556 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:21:54.534526  375556 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:21:55.088263  375556 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:21:55.088497  375556 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-292054 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:21:55.625246  375556 kubeadm.go:322] [bootstrap-token] Using token: ca3oft.99pjh791kq903kea
	I0108 22:21:55.627406  375556 out.go:204]   - Configuring RBAC rules ...
	I0108 22:21:55.627535  375556 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:21:55.635469  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:21:55.658589  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:21:55.664394  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:21:55.670923  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:21:55.678315  375556 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:21:55.707544  375556 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:21:56.011289  375556 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:21:56.074068  375556 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:21:56.074122  375556 kubeadm.go:322] 
	I0108 22:21:56.074195  375556 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:21:56.074210  375556 kubeadm.go:322] 
	I0108 22:21:56.074305  375556 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:21:56.074315  375556 kubeadm.go:322] 
	I0108 22:21:56.074346  375556 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:21:56.074474  375556 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:21:56.074550  375556 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:21:56.074560  375556 kubeadm.go:322] 
	I0108 22:21:56.074635  375556 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:21:56.074649  375556 kubeadm.go:322] 
	I0108 22:21:56.074713  375556 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:21:56.074723  375556 kubeadm.go:322] 
	I0108 22:21:56.074810  375556 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:21:56.074933  375556 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:21:56.075027  375556 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:21:56.075037  375556 kubeadm.go:322] 
	I0108 22:21:56.075161  375556 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:21:56.075285  375556 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:21:56.075295  375556 kubeadm.go:322] 
	I0108 22:21:56.075430  375556 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ca3oft.99pjh791kq903kea \
	I0108 22:21:56.075574  375556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:21:56.075612  375556 kubeadm.go:322] 	--control-plane 
	I0108 22:21:56.075621  375556 kubeadm.go:322] 
	I0108 22:21:56.075733  375556 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:21:56.075744  375556 kubeadm.go:322] 
	I0108 22:21:56.075843  375556 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ca3oft.99pjh791kq903kea \
	I0108 22:21:56.075969  375556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:21:56.076235  375556 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:21:56.076281  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:21:56.076299  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:21:56.078385  375556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:21:56.079942  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:21:53.149618  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:55.649585  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:57.650103  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:56.112245  375556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
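The bridge CNI step writes a single conflist (457 bytes here) into /etc/cni/net.d. The log does not include the file body; a typical bridge plus host-local configuration of that shape looks like the following, shown purely as an illustration of the format rather than the exact file minikube generated:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF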
	I0108 22:21:56.183435  375556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:21:56.183568  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:56.183570  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=default-k8s-diff-port-292054 minikube.k8s.io/updated_at=2024_01_08T22_21_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:56.217296  375556 ops.go:34] apiserver oom_adj: -16
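The two kubectl commands launched just above grant the kube-system default service account cluster-admin (clusterrolebinding minikube-rbac) and stamp the control-plane node with the minikube.k8s.io/* labels, while the oom_adj read records the apiserver's score (-16, i.e. deprioritised for the OOM killer). Either can be verified after the fact with standard kubectl (illustrative):

    kubectl get clusterrolebinding minikube-rbac -o wide
    kubectl get node default-k8s-diff-port-292054 --show-labels | tr ',' '\n' | grep minikube.k8s.io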
	I0108 22:21:56.721884  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:57.222982  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:57.722219  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:58.222712  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:58.722544  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:59.222082  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:59.722808  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.222562  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.722284  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.149913  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:02.650967  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:01.222401  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:01.722606  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:02.222313  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:02.722582  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:03.222793  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:03.722359  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:04.222245  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:04.722706  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.222841  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.722871  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.148941  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:07.149461  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:06.222648  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:06.722581  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:07.222288  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:07.722274  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.222744  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.722856  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.963467  375556 kubeadm.go:1088] duration metric: took 12.779973028s to wait for elevateKubeSystemPrivileges.
	I0108 22:22:08.963522  375556 kubeadm.go:406] StartCluster complete in 5m45.912753673s
	I0108 22:22:08.963553  375556 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:22:08.963665  375556 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:22:08.966435  375556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:22:08.966775  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:22:08.966928  375556 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:22:08.967034  375556 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967075  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:22:08.967095  375556 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.967104  375556 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:22:08.967152  375556 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967183  375556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-292054"
	I0108 22:22:08.967192  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.967271  375556 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967300  375556 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.967310  375556 addons.go:246] addon metrics-server should already be in state true
	I0108 22:22:08.967375  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.967667  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967695  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.967756  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967769  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967779  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.967796  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.986925  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0108 22:22:08.987023  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I0108 22:22:08.987549  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.987698  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.988282  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.988313  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.988483  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.988508  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.988606  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0108 22:22:08.989056  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.989111  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.989337  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:08.989834  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.989872  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.990158  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.990780  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.990796  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.991245  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.991880  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.991911  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.995239  375556 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.995265  375556 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:22:08.995290  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.995820  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.995865  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:09.011939  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0108 22:22:09.012468  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.013299  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.013318  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.013724  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.013935  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
	I0108 22:22:09.014168  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.014906  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.015481  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.015498  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.015842  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.016396  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:09.016424  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:09.016659  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.016741  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
	I0108 22:22:09.019481  375556 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:22:09.017701  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.021632  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:22:09.021669  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:22:09.021704  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.022354  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.022387  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.022852  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.023158  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.025362  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.027347  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.029567  375556 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:22:09.027877  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.028367  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.032055  375556 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:22:09.032070  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:22:09.032103  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.032160  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.032368  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.032489  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.032591  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.037266  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.037969  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.038003  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.038588  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.038650  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0108 22:22:09.038933  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.039112  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.039299  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.039313  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.039936  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.039974  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.040395  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.040652  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.042584  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.043735  375556 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:22:09.043754  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:22:09.043774  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.047511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.047647  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.047668  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.047828  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.048115  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.048267  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.048432  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.273503  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:22:09.286359  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:22:09.286398  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:22:09.395127  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:22:09.395521  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:22:09.399318  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:22:09.399351  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:22:09.529413  375556 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-292054" context rescaled to 1 replicas
	I0108 22:22:09.529456  375556 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:22:09.531970  375556 out.go:177] * Verifying Kubernetes components...
	I0108 22:22:09.533935  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:22:09.608669  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:22:09.608706  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:22:09.762095  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:22:11.642700  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369133486s)
	I0108 22:22:11.642752  375556 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
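For reference: the sed pipeline that just completed does not rewrite the Corefile stored in the coredns ConfigMap; it only inserts two fragments into it. A minimal sketch of the effect, showing just the inserted lines and the existing directives the sed expressions anchor on (indentation approximated):

        log                                        # inserted before the existing "errors" directive
        errors
        ...
        hosts {                                    # block inserted before the existing "forward" directive
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

This is the change start.go logs above as injecting the host.minikube.internal host record into CoreDNS's ConfigMap.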
	I0108 22:22:12.525251  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.130061811s)
	I0108 22:22:12.525333  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525335  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.129764757s)
	I0108 22:22:12.525352  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.525383  375556 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.99138928s)
	I0108 22:22:12.525439  375556 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-292054" to be "Ready" ...
	I0108 22:22:12.525390  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.525785  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.525799  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.525810  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525820  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.526200  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526208  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526224  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.526234  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.526244  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.526250  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526320  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526345  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.526627  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526640  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526644  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.600599  375556 node_ready.go:49] node "default-k8s-diff-port-292054" has status "Ready":"True"
	I0108 22:22:12.600630  375556 node_ready.go:38] duration metric: took 75.170013ms waiting for node "default-k8s-diff-port-292054" to be "Ready" ...
	I0108 22:22:12.600642  375556 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:22:12.607695  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.607735  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.608178  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.608205  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.698479  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.93630517s)
	I0108 22:22:12.698597  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.698624  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.699090  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.699114  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.699129  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.699141  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.699570  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.699611  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.699628  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.699642  375556 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-292054"
	I0108 22:22:12.702579  375556 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 22:22:09.152248  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:11.649021  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:12.704051  375556 addons.go:508] enable addons completed in 3.737129591s: enabled=[storage-provisioner default-storageclass metrics-server]
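Outside the test harness, the same addon set can be inspected or toggled per profile with the standard minikube addons subcommands; a sketch, assuming the profile name shown in the log:

    minikube -p default-k8s-diff-port-292054 addons list
    minikube -p default-k8s-diff-port-292054 addons enable metrics-server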
	I0108 22:22:12.730733  375556 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.740214  375556 pod_ready.go:92] pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.740241  375556 pod_ready.go:81] duration metric: took 1.009466865s waiting for pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.740252  375556 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.749855  375556 pod_ready.go:92] pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.749884  375556 pod_ready.go:81] duration metric: took 9.624914ms waiting for pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.749897  375556 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.774037  375556 pod_ready.go:92] pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.774082  375556 pod_ready.go:81] duration metric: took 24.173765ms waiting for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.774099  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.793737  375556 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.793763  375556 pod_ready.go:81] duration metric: took 19.654354ms waiting for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.793786  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.802646  375556 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.802675  375556 pod_ready.go:81] duration metric: took 8.880262ms waiting for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.802686  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bwmkb" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:14.935671  375556 pod_ready.go:92] pod "kube-proxy-bwmkb" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:14.935701  375556 pod_ready.go:81] duration metric: took 1.133008415s waiting for pod "kube-proxy-bwmkb" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:14.935712  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:15.337751  375556 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:15.337785  375556 pod_ready.go:81] duration metric: took 402.065003ms waiting for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:15.337799  375556 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace to be "Ready" ...
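The long alternating run of pod_ready checks that follows is the two test runs (PIDs 375556 and 375293) polling their metrics-server pods through the apiserver every couple of seconds until the pods report Ready or the wait deadline expires. A roughly equivalent manual check, assuming the kubectl context is named after the profile and the addon pods carry the usual k8s-app=metrics-server label:

    kubectl --context default-k8s-diff-port-292054 -n kube-system \
      wait --for=condition=ready pod -l k8s-app=metrics-server --timeout=6m

In the 375293 run below the condition is never met, which surfaces at 22:25:20 as the "WaitExtra: waitPodCondition: context deadline exceeded" error.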
	I0108 22:22:13.651032  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:16.150676  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:17.347997  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:19.848727  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:18.651581  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:21.153888  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:22.348002  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:24.348563  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:23.159095  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:25.648575  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:27.650462  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:26.847900  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:28.848176  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:30.148277  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:32.148917  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:31.353639  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:33.847750  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:34.649869  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:36.650396  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:36.349185  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:38.846642  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:40.851501  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:39.148741  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:41.150479  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:43.348737  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:45.848448  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:43.649911  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:46.149760  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:48.348731  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:50.849503  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:48.648402  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:50.649986  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:53.349307  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:55.349864  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:53.152397  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:55.651270  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:57.652287  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:57.854209  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:00.347211  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:59.655447  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:02.151802  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:02.351659  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:04.848930  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:04.650649  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:07.148845  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:06.864466  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:09.349319  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:09.150267  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:11.647897  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:11.350470  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:13.846976  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:13.648246  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:15.653072  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:16.348755  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:18.847624  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:20.850947  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:18.147230  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:20.148799  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:22.150181  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:22.854027  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:25.347172  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:24.648528  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:26.650104  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:27.350880  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:29.847065  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:28.651914  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:31.149983  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:31.849609  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:33.849918  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:35.852770  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:33.648054  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:35.650693  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:38.346376  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:40.347831  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:38.148131  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:40.149293  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:42.151041  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:42.845779  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:44.849417  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:44.655548  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:47.150423  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:46.850811  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:49.347304  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:49.652923  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:52.149820  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:51.348180  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:53.846474  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:55.847511  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:54.649820  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:57.149372  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:57.849233  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:00.348798  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:59.154056  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:01.649087  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:02.349247  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:04.350582  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:03.650176  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:06.153560  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:06.848567  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:09.349670  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:08.649461  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:11.149266  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:11.847194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:13.847282  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:15.849466  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:13.650152  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:15.653477  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:17.849683  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:20.348186  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:18.150536  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:20.650961  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:22.849232  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:25.349020  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:23.149893  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:25.151776  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:27.649498  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:27.848253  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:29.849644  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:29.651074  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:32.151463  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:32.348246  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:34.349140  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:34.650582  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:36.651676  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:36.848220  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:38.848664  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:40.848971  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:39.152183  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:41.648320  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:42.849338  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:45.347960  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:44.150739  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:46.649332  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:47.350030  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:49.847947  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:48.650293  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:50.650602  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:52.344857  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:54.347419  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:53.149776  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:55.150342  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:57.648269  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:56.347866  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:58.350081  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:00.848175  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:59.650591  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:02.149598  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:03.349797  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:05.849888  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:04.648771  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:06.651847  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:08.346160  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:10.348673  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:09.149033  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:11.149301  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:12.352279  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:14.846849  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:13.153318  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:15.651109  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:16.849657  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:19.347996  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:18.150751  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:20.650211  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:20.650242  375293 pod_ready.go:81] duration metric: took 4m0.010726332s waiting for pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace to be "Ready" ...
	E0108 22:25:20.650252  375293 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 22:25:20.650259  375293 pod_ready.go:38] duration metric: took 4m1.939720475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:25:20.650300  375293 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:25:20.650336  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:20.650406  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:20.714451  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:20.714500  375293 cri.go:89] found id: ""
	I0108 22:25:20.714513  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:20.714621  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.720237  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:20.720367  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:20.767857  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:20.767904  375293 cri.go:89] found id: ""
	I0108 22:25:20.767916  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:20.767995  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.772859  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:20.772969  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:20.817193  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:20.817225  375293 cri.go:89] found id: ""
	I0108 22:25:20.817236  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:20.817310  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.824003  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:20.824113  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:20.884204  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:20.884252  375293 cri.go:89] found id: ""
	I0108 22:25:20.884263  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:20.884335  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.889658  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:20.889756  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:20.949423  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:20.949460  375293 cri.go:89] found id: ""
	I0108 22:25:20.949472  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:20.949543  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.954856  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:20.954944  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:21.011490  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:21.011538  375293 cri.go:89] found id: ""
	I0108 22:25:21.011551  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:21.011629  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:21.017544  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:21.017638  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:21.066267  375293 cri.go:89] found id: ""
	I0108 22:25:21.066310  375293 logs.go:284] 0 containers: []
	W0108 22:25:21.066322  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:21.066331  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:21.066404  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:21.123537  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:21.123571  375293 cri.go:89] found id: ""
	I0108 22:25:21.123583  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:21.123660  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:21.129269  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:21.129309  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:21.200266  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:21.200308  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:21.246669  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:21.246705  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:21.265861  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:21.265908  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:21.327968  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:21.328016  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:21.386940  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:21.386986  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:21.443896  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:21.443941  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:21.496699  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:21.496746  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:21.962773  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:21.962820  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:22.024288  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:22.024330  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:22.133928  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:22.133976  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:22.301006  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:22.301051  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:21.348655  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:23.350759  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:25.351301  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:24.847470  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:25:24.867718  375293 api_server.go:72] duration metric: took 4m8.80605206s to wait for apiserver process to appear ...
	I0108 22:25:24.867750  375293 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:25:24.867788  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:24.867842  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:24.918048  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:24.918090  375293 cri.go:89] found id: ""
	I0108 22:25:24.918104  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:24.918196  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:24.923984  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:24.924096  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:24.981033  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:24.981058  375293 cri.go:89] found id: ""
	I0108 22:25:24.981066  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:24.981116  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:24.985729  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:24.985802  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:25.038522  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:25.038558  375293 cri.go:89] found id: ""
	I0108 22:25:25.038570  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:25.038637  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.043106  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:25.043218  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:25.100189  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:25.100218  375293 cri.go:89] found id: ""
	I0108 22:25:25.100230  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:25.100298  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.107135  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:25.107252  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:25.155243  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:25.155276  375293 cri.go:89] found id: ""
	I0108 22:25:25.155288  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:25.155354  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.160457  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:25.160559  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:25.214754  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:25.214788  375293 cri.go:89] found id: ""
	I0108 22:25:25.214799  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:25.214855  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.219504  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:25.219595  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:25.267255  375293 cri.go:89] found id: ""
	I0108 22:25:25.267302  375293 logs.go:284] 0 containers: []
	W0108 22:25:25.267318  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:25.267329  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:25.267442  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:25.322636  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:25.322668  375293 cri.go:89] found id: ""
	I0108 22:25:25.322679  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:25.322750  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.327559  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:25.327592  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:25.396299  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:25.396354  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:25.447121  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:25.447188  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:25.501357  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:25.501413  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:25.572678  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:25.572741  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:25.624203  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:25.624248  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:26.021189  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:26.021250  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:26.122845  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:26.122893  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:26.297704  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:26.297746  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:26.361771  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:26.361826  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:26.422252  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:26.422292  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:26.479602  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:26.479641  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:27.848906  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:30.348452  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:28.997002  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:25:29.008040  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I0108 22:25:29.009729  375293 api_server.go:141] control plane version: v1.28.4
	I0108 22:25:29.009758  375293 api_server.go:131] duration metric: took 4.142001296s to wait for apiserver health ...
	I0108 22:25:29.009770  375293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:25:29.009807  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:29.009872  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:29.064244  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:29.064280  375293 cri.go:89] found id: ""
	I0108 22:25:29.064292  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:29.064357  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.069801  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:29.069900  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:29.115294  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:29.115328  375293 cri.go:89] found id: ""
	I0108 22:25:29.115338  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:29.115426  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.120512  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:29.120600  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:29.173571  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:29.173600  375293 cri.go:89] found id: ""
	I0108 22:25:29.173609  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:29.173670  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.179649  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:29.179724  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:29.230220  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:29.230272  375293 cri.go:89] found id: ""
	I0108 22:25:29.230286  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:29.230384  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.235437  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:29.235540  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:29.280861  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:29.280892  375293 cri.go:89] found id: ""
	I0108 22:25:29.280904  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:29.280974  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.286131  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:29.286247  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:29.337665  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:29.337700  375293 cri.go:89] found id: ""
	I0108 22:25:29.337711  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:29.337765  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.343912  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:29.344009  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:29.400428  375293 cri.go:89] found id: ""
	I0108 22:25:29.400458  375293 logs.go:284] 0 containers: []
	W0108 22:25:29.400466  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:29.400476  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:29.400532  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:29.458375  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:29.458416  375293 cri.go:89] found id: ""
	I0108 22:25:29.458428  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:29.458503  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.464513  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:29.464555  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:29.809503  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:29.809550  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:29.916786  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:29.916864  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:30.077876  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:30.077929  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:30.139380  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:30.139445  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:30.186829  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:30.186861  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:30.244185  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:30.244230  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:30.300429  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:30.300488  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:30.316880  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:30.316920  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:30.370537  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:30.370581  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:30.419043  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:30.419093  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:30.482758  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:30.482804  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:33.043083  375293 system_pods.go:59] 8 kube-system pods found
	I0108 22:25:33.043134  375293 system_pods.go:61] "coredns-5dd5756b68-jbz6n" [562faf84-b986-4f0e-97cd-41aa5ac7ea17] Running
	I0108 22:25:33.043139  375293 system_pods.go:61] "etcd-embed-certs-903819" [68146164-7115-4489-8010-32774433564a] Running
	I0108 22:25:33.043143  375293 system_pods.go:61] "kube-apiserver-embed-certs-903819" [367d0612-bd4d-448f-84f2-118afcb9d095] Running
	I0108 22:25:33.043148  375293 system_pods.go:61] "kube-controller-manager-embed-certs-903819" [43c3944a-3dfd-44ce-ba68-baebbced4406] Running
	I0108 22:25:33.043152  375293 system_pods.go:61] "kube-proxy-hqj9b" [14b3f3bd-1d65-4382-adc2-09344b54463d] Running
	I0108 22:25:33.043157  375293 system_pods.go:61] "kube-scheduler-embed-certs-903819" [9c004a9c-c77a-4ee5-970d-db41ddc26439] Running
	I0108 22:25:33.043167  375293 system_pods.go:61] "metrics-server-57f55c9bc5-qhjlv" [f1bff39b-c944-4de0-a5b8-eb239e91c6db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:25:33.043172  375293 system_pods.go:61] "storage-provisioner" [949c6275-6836-4035-89f5-f2d2c2caaa89] Running
	I0108 22:25:33.043180  375293 system_pods.go:74] duration metric: took 4.033402969s to wait for pod list to return data ...
	I0108 22:25:33.043189  375293 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:25:33.047488  375293 default_sa.go:45] found service account: "default"
	I0108 22:25:33.047526  375293 default_sa.go:55] duration metric: took 4.328925ms for default service account to be created ...
	I0108 22:25:33.047540  375293 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:25:33.055793  375293 system_pods.go:86] 8 kube-system pods found
	I0108 22:25:33.055824  375293 system_pods.go:89] "coredns-5dd5756b68-jbz6n" [562faf84-b986-4f0e-97cd-41aa5ac7ea17] Running
	I0108 22:25:33.055829  375293 system_pods.go:89] "etcd-embed-certs-903819" [68146164-7115-4489-8010-32774433564a] Running
	I0108 22:25:33.055834  375293 system_pods.go:89] "kube-apiserver-embed-certs-903819" [367d0612-bd4d-448f-84f2-118afcb9d095] Running
	I0108 22:25:33.055838  375293 system_pods.go:89] "kube-controller-manager-embed-certs-903819" [43c3944a-3dfd-44ce-ba68-baebbced4406] Running
	I0108 22:25:33.055841  375293 system_pods.go:89] "kube-proxy-hqj9b" [14b3f3bd-1d65-4382-adc2-09344b54463d] Running
	I0108 22:25:33.055845  375293 system_pods.go:89] "kube-scheduler-embed-certs-903819" [9c004a9c-c77a-4ee5-970d-db41ddc26439] Running
	I0108 22:25:33.055852  375293 system_pods.go:89] "metrics-server-57f55c9bc5-qhjlv" [f1bff39b-c944-4de0-a5b8-eb239e91c6db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:25:33.055859  375293 system_pods.go:89] "storage-provisioner" [949c6275-6836-4035-89f5-f2d2c2caaa89] Running
	I0108 22:25:33.055872  375293 system_pods.go:126] duration metric: took 8.323722ms to wait for k8s-apps to be running ...
	I0108 22:25:33.055881  375293 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:25:33.055939  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:25:33.074598  375293 system_svc.go:56] duration metric: took 18.695286ms WaitForService to wait for kubelet.
	I0108 22:25:33.074637  375293 kubeadm.go:581] duration metric: took 4m17.012976103s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:25:33.074671  375293 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:25:33.079188  375293 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:25:33.079227  375293 node_conditions.go:123] node cpu capacity is 2
	I0108 22:25:33.079246  375293 node_conditions.go:105] duration metric: took 4.559946ms to run NodePressure ...
	I0108 22:25:33.079261  375293 start.go:228] waiting for startup goroutines ...
	I0108 22:25:33.079270  375293 start.go:233] waiting for cluster config update ...
	I0108 22:25:33.079283  375293 start.go:242] writing updated cluster config ...
	I0108 22:25:33.079792  375293 ssh_runner.go:195] Run: rm -f paused
	I0108 22:25:33.144148  375293 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:25:33.146897  375293 out.go:177] * Done! kubectl is now configured to use "embed-certs-903819" cluster and "default" namespace by default
	I0108 22:25:32.349693  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:34.845955  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:36.851909  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:39.348575  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:41.350957  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:43.848565  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:46.348360  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:48.847346  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:51.346764  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:53.849331  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:56.349683  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:58.350457  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:00.847803  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:03.352522  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:05.844769  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:07.846346  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:09.848453  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:11.850250  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:14.347576  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:15.349616  375556 pod_ready.go:81] duration metric: took 4m0.011802861s waiting for pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace to be "Ready" ...
	E0108 22:26:15.349643  375556 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 22:26:15.349651  375556 pod_ready.go:38] duration metric: took 4m2.748998751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:26:15.349666  375556 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:26:15.349720  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:15.349773  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:15.414233  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:15.414273  375556 cri.go:89] found id: ""
	I0108 22:26:15.414286  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:15.414367  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.421348  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:15.421439  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:15.480484  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:15.480508  375556 cri.go:89] found id: ""
	I0108 22:26:15.480517  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:15.480569  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.486049  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:15.486125  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:15.551549  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:15.551588  375556 cri.go:89] found id: ""
	I0108 22:26:15.551600  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:15.551665  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.556950  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:15.557035  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:15.607375  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:15.607417  375556 cri.go:89] found id: ""
	I0108 22:26:15.607433  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:15.607530  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.613182  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:15.613253  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:15.663780  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:15.663805  375556 cri.go:89] found id: ""
	I0108 22:26:15.663813  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:15.663882  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.668629  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:15.668748  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:15.722341  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:15.722370  375556 cri.go:89] found id: ""
	I0108 22:26:15.722380  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:15.722453  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.727974  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:15.728089  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:15.782298  375556 cri.go:89] found id: ""
	I0108 22:26:15.782331  375556 logs.go:284] 0 containers: []
	W0108 22:26:15.782349  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:15.782358  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:15.782436  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:15.836150  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:15.836194  375556 cri.go:89] found id: ""
	I0108 22:26:15.836207  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:15.836307  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.842152  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:15.842184  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:15.900314  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:15.900378  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:15.974860  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:15.974903  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:16.021465  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:16.021529  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:16.477647  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:16.477706  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:16.588562  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:16.588615  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:16.604310  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:16.604383  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:16.770738  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:16.770778  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:16.835271  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:16.835320  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:16.899297  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:16.899354  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:16.957508  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:16.957549  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:17.001214  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:17.001255  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:19.561271  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:26:19.578731  375556 api_server.go:72] duration metric: took 4m10.049236985s to wait for apiserver process to appear ...
	I0108 22:26:19.578768  375556 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:26:19.578821  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:19.578897  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:19.630380  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:19.630410  375556 cri.go:89] found id: ""
	I0108 22:26:19.630422  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:19.630496  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.635902  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:19.635998  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:19.682023  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:19.682057  375556 cri.go:89] found id: ""
	I0108 22:26:19.682072  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:19.682143  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.688443  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:19.688567  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:19.738612  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:19.738651  375556 cri.go:89] found id: ""
	I0108 22:26:19.738664  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:19.738790  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.745590  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:19.745726  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:19.796647  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:19.796674  375556 cri.go:89] found id: ""
	I0108 22:26:19.796685  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:19.796747  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.801789  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:19.801872  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:19.846026  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:19.846060  375556 cri.go:89] found id: ""
	I0108 22:26:19.846070  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:19.846150  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.851227  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:19.851299  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:19.906135  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:19.906173  375556 cri.go:89] found id: ""
	I0108 22:26:19.906184  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:19.906267  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.911914  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:19.912048  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:19.960064  375556 cri.go:89] found id: ""
	I0108 22:26:19.960104  375556 logs.go:284] 0 containers: []
	W0108 22:26:19.960117  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:19.960126  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:19.960198  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:20.010136  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:20.010171  375556 cri.go:89] found id: ""
	I0108 22:26:20.010181  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:20.010256  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:20.015368  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:20.015402  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:20.122508  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:20.122575  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:20.272565  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:20.272610  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:20.335281  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:20.335334  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:20.384028  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:20.384088  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:20.779192  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:20.779250  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:20.795137  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:20.795170  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:20.863312  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:20.863395  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:20.918084  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:20.918132  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:20.966066  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:20.966108  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:21.030610  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:21.030704  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:21.083525  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:21.083567  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:16:28 UTC, ends at Mon 2024-01-08 22:26:22 UTC. --
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.645486007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704752782645472523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=9e355ca3-5224-443e-bc09-0f57e7dcc5a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.646472364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7bdc042a-60a2-4e0d-bc2d-e80f2b542954 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.646555631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7bdc042a-60a2-4e0d-bc2d-e80f2b542954 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.646771155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752259920851228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d8a800f2a2b2f67d1ea5e05ba4caff1f20a555e2af9dd6eadddc72619ba876,PodSandboxId:1886af1d1e6dcb202dab4ff33f61644a22ee706cd53e1c1cdd936c0b788dc54a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704752233573056054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc706965-4d2e-4bd5-a1c1-0616462e9840,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8a5331,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17,PodSandboxId:4bfe2f5311e83d5fe56a101d85af06bc3658e6014ba7457c593937a6db200d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704752232427113478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fzlzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f48e2d6f-a573-463f-b96e-9f96b3161d66,},Annotations:map[string]string{io.kubernetes.container.hash: 74221cba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420,PodSandboxId:cde50b732d649161d9432de65749fe4aef982535d64a8b6dbec2a514de5aae98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704752230515710640,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f37e50-5c82-4288-8cf8
-cb1c576c7472,},Annotations:map[string]string{io.kubernetes.container.hash: 8f345252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704752229007568201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2
fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434,PodSandboxId:739ce810388b681fba1c9d1c993e89e06fc980d56fa3567bbdc2d1972fc9cb9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704752222645618242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39784c5a6adcc95506cfe25e9403f5d5,},Annotations:map[string]string{io.kube
rnetes.container.hash: be9ca32a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab,PodSandboxId:b709e0e02c865c6ac430c5bc6e4e9d3ce8a60c668ee4357037f895e5960ddba6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704752221051594376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592,PodSandboxId:8405cb736193700954fb2c65b085a476d7242091862e396b26750258a7a86cc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704752220967152669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b45d8726bdb80fb0dada6f51c1b17e,},Annotations:map[string]string{io.kubernetes.container.hash:
6ea9e46e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61,PodSandboxId:13e6538c94ae420201ad07f99e833b351472b7b05f1364053536309d1362a05d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704752220829540553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7bdc042a-60a2-4e0d-bc2d-e80f2b542954 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.693986562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4029460b-a045-4dfa-bb3f-06f58edd183f name=/runtime.v1.RuntimeService/Version
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.694084115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4029460b-a045-4dfa-bb3f-06f58edd183f name=/runtime.v1.RuntimeService/Version
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.696017104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c14ba781-5c72-4aa5-bb15-3862708ad79f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.696548449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704752782696521301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=c14ba781-5c72-4aa5-bb15-3862708ad79f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.698835175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=09fec1a2-f0e5-4c6a-b739-bb1262874a4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.699400754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=09fec1a2-f0e5-4c6a-b739-bb1262874a4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.700140131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752259920851228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d8a800f2a2b2f67d1ea5e05ba4caff1f20a555e2af9dd6eadddc72619ba876,PodSandboxId:1886af1d1e6dcb202dab4ff33f61644a22ee706cd53e1c1cdd936c0b788dc54a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704752233573056054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc706965-4d2e-4bd5-a1c1-0616462e9840,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8a5331,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17,PodSandboxId:4bfe2f5311e83d5fe56a101d85af06bc3658e6014ba7457c593937a6db200d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704752232427113478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fzlzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f48e2d6f-a573-463f-b96e-9f96b3161d66,},Annotations:map[string]string{io.kubernetes.container.hash: 74221cba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420,PodSandboxId:cde50b732d649161d9432de65749fe4aef982535d64a8b6dbec2a514de5aae98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704752230515710640,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f37e50-5c82-4288-8cf8
-cb1c576c7472,},Annotations:map[string]string{io.kubernetes.container.hash: 8f345252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704752229007568201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2
fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434,PodSandboxId:739ce810388b681fba1c9d1c993e89e06fc980d56fa3567bbdc2d1972fc9cb9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704752222645618242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39784c5a6adcc95506cfe25e9403f5d5,},Annotations:map[string]string{io.kube
rnetes.container.hash: be9ca32a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab,PodSandboxId:b709e0e02c865c6ac430c5bc6e4e9d3ce8a60c668ee4357037f895e5960ddba6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704752221051594376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592,PodSandboxId:8405cb736193700954fb2c65b085a476d7242091862e396b26750258a7a86cc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704752220967152669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b45d8726bdb80fb0dada6f51c1b17e,},Annotations:map[string]string{io.kubernetes.container.hash:
6ea9e46e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61,PodSandboxId:13e6538c94ae420201ad07f99e833b351472b7b05f1364053536309d1362a05d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704752220829540553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=09fec1a2-f0e5-4c6a-b739-bb1262874a4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.755090461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=26c441b7-4c27-46fc-8081-0d70c22a1376 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.755181789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=26c441b7-4c27-46fc-8081-0d70c22a1376 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.756907346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0cadf68d-bb12-49f8-870f-3000ba9300bc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.757464795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704752782757448295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=0cadf68d-bb12-49f8-870f-3000ba9300bc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.758361056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=27b72cf4-7451-4ffd-9d4a-2cac5bb7212f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.758414915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=27b72cf4-7451-4ffd-9d4a-2cac5bb7212f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.758642167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752259920851228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d8a800f2a2b2f67d1ea5e05ba4caff1f20a555e2af9dd6eadddc72619ba876,PodSandboxId:1886af1d1e6dcb202dab4ff33f61644a22ee706cd53e1c1cdd936c0b788dc54a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704752233573056054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc706965-4d2e-4bd5-a1c1-0616462e9840,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8a5331,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17,PodSandboxId:4bfe2f5311e83d5fe56a101d85af06bc3658e6014ba7457c593937a6db200d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704752232427113478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fzlzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f48e2d6f-a573-463f-b96e-9f96b3161d66,},Annotations:map[string]string{io.kubernetes.container.hash: 74221cba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420,PodSandboxId:cde50b732d649161d9432de65749fe4aef982535d64a8b6dbec2a514de5aae98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704752230515710640,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f37e50-5c82-4288-8cf8
-cb1c576c7472,},Annotations:map[string]string{io.kubernetes.container.hash: 8f345252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704752229007568201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2
fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434,PodSandboxId:739ce810388b681fba1c9d1c993e89e06fc980d56fa3567bbdc2d1972fc9cb9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704752222645618242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39784c5a6adcc95506cfe25e9403f5d5,},Annotations:map[string]string{io.kube
rnetes.container.hash: be9ca32a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab,PodSandboxId:b709e0e02c865c6ac430c5bc6e4e9d3ce8a60c668ee4357037f895e5960ddba6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704752221051594376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592,PodSandboxId:8405cb736193700954fb2c65b085a476d7242091862e396b26750258a7a86cc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704752220967152669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b45d8726bdb80fb0dada6f51c1b17e,},Annotations:map[string]string{io.kubernetes.container.hash:
6ea9e46e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61,PodSandboxId:13e6538c94ae420201ad07f99e833b351472b7b05f1364053536309d1362a05d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704752220829540553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=27b72cf4-7451-4ffd-9d4a-2cac5bb7212f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.804395479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4b5abb07-d589-4e26-8f14-bf2fc659d070 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.804473882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4b5abb07-d589-4e26-8f14-bf2fc659d070 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.806418066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=19698676-a83f-4edb-bea1-d57c12d806ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.807114498Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704752782807083181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=19698676-a83f-4edb-bea1-d57c12d806ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.808344433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6a87d329-fb68-4943-b0fa-c0154c3dd7e9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.808430121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6a87d329-fb68-4943-b0fa-c0154c3dd7e9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:26:22 old-k8s-version-079759 crio[716]: time="2024-01-08 22:26:22.808802957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752259920851228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d8a800f2a2b2f67d1ea5e05ba4caff1f20a555e2af9dd6eadddc72619ba876,PodSandboxId:1886af1d1e6dcb202dab4ff33f61644a22ee706cd53e1c1cdd936c0b788dc54a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704752233573056054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc706965-4d2e-4bd5-a1c1-0616462e9840,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8a5331,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17,PodSandboxId:4bfe2f5311e83d5fe56a101d85af06bc3658e6014ba7457c593937a6db200d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704752232427113478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fzlzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f48e2d6f-a573-463f-b96e-9f96b3161d66,},Annotations:map[string]string{io.kubernetes.container.hash: 74221cba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420,PodSandboxId:cde50b732d649161d9432de65749fe4aef982535d64a8b6dbec2a514de5aae98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704752230515710640,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f37e50-5c82-4288-8cf8
-cb1c576c7472,},Annotations:map[string]string{io.kubernetes.container.hash: 8f345252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704752229007568201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2
fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434,PodSandboxId:739ce810388b681fba1c9d1c993e89e06fc980d56fa3567bbdc2d1972fc9cb9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704752222645618242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39784c5a6adcc95506cfe25e9403f5d5,},Annotations:map[string]string{io.kube
rnetes.container.hash: be9ca32a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab,PodSandboxId:b709e0e02c865c6ac430c5bc6e4e9d3ce8a60c668ee4357037f895e5960ddba6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704752221051594376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592,PodSandboxId:8405cb736193700954fb2c65b085a476d7242091862e396b26750258a7a86cc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704752220967152669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b45d8726bdb80fb0dada6f51c1b17e,},Annotations:map[string]string{io.kubernetes.container.hash:
6ea9e46e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61,PodSandboxId:13e6538c94ae420201ad07f99e833b351472b7b05f1364053536309d1362a05d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704752220829540553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6a87d329-fb68-4943-b0fa-c0154c3dd7e9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5e59f9dbead2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       1                   2e411ea59ae37       storage-provisioner
	66d8a800f2a2b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                   0                   1886af1d1e6dc       busybox
	f11644eb8c5e5       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      9 minutes ago       Running             coredns                   0                   4bfe2f5311e83       coredns-5644d7b6d9-fzlzx
	d6357e946560e       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      9 minutes ago       Running             kube-proxy                0                   cde50b732d649       kube-proxy-mfs65
	4adf6d6ad1709       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   2e411ea59ae37       storage-provisioner
	37878737b7049       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      9 minutes ago       Running             etcd                      0                   739ce810388b6       etcd-old-k8s-version-079759
	f2a5eecdb0c68       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      9 minutes ago       Running             kube-scheduler            0                   b709e0e02c865       kube-scheduler-old-k8s-version-079759
	26d6552f76c38       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      9 minutes ago       Running             kube-apiserver            0                   8405cb7361937       kube-apiserver-old-k8s-version-079759
	bcbc4b306a60a       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      9 minutes ago       Running             kube-controller-manager   0                   13e6538c94ae4       kube-controller-manager-old-k8s-version-079759
	
	
	==> coredns [f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17] <==
	E0108 22:07:57.615159       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=478&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639459       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=482&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639698       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=146&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:15.456904       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	2024-01-08T22:07:23.769Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	2024-01-08T22:07:23.800Z [INFO] 127.0.0.1:42162 - 57998 "HINFO IN 7314273592572006048.1780055050944407881. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030508662s
	E0108 22:07:57.615159       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=478&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.615159       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=478&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.615159       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=478&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639459       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=482&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639459       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=482&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639459       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=482&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639698       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=146&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639698       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=146&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639698       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=146&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-08T22:17:12.800Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-08T22:17:12.800Z [INFO] CoreDNS-1.6.2
	2024-01-08T22:17:12.800Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-08T22:17:12.845Z [INFO] 127.0.0.1:56141 - 45734 "HINFO IN 6412304003339905310.5487353666919062536. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044314845s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-079759
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-079759
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=old-k8s-version-079759
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_06_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:06:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:25:38 +0000   Mon, 08 Jan 2024 22:06:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:25:38 +0000   Mon, 08 Jan 2024 22:06:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:25:38 +0000   Mon, 08 Jan 2024 22:06:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:25:38 +0000   Mon, 08 Jan 2024 22:17:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    old-k8s-version-079759
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 a54b7c7cd22d472991831b6fcc8e5a4e
	 System UUID:                a54b7c7c-d22d-4729-9183-1b6fcc8e5a4e
	 Boot ID:                    0790ceb3-d2f6-4d4f-b3d6-8760fffda9df
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                coredns-5644d7b6d9-fzlzx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                etcd-old-k8s-version-079759                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-apiserver-old-k8s-version-079759             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-controller-manager-old-k8s-version-079759    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                kube-proxy-mfs65                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                kube-scheduler-old-k8s-version-079759             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                metrics-server-74d5856cc6-sdlnw                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         8m58s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)      kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)      kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)      kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                    kube-proxy, old-k8s-version-079759  Starting kube-proxy.
	  Normal  Starting                 9m24s                  kubelet, old-k8s-version-079759     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x7 over 9m24s)  kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x8 over 9m24s)  kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet, old-k8s-version-079759     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m13s                  kube-proxy, old-k8s-version-079759  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 8 22:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077820] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.134270] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.734516] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.177995] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.755367] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.913808] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.130350] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.183623] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.132893] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.281141] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +20.233746] systemd-fstab-generator[1028]: Ignoring "noauto" for root device
	[  +0.519162] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 8 22:17] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434] <==
	2024-01-08 22:17:02.754486 I | etcdserver: heartbeat = 100ms
	2024-01-08 22:17:02.754500 I | etcdserver: election = 1000ms
	2024-01-08 22:17:02.754515 I | etcdserver: snapshot count = 10000
	2024-01-08 22:17:02.754537 I | etcdserver: advertise client URLs = https://192.168.39.183:2379
	2024-01-08 22:17:02.759016 I | etcdserver: restarting member f87838631c8138de in cluster 2dc4003dc2fbf749 at commit index 520
	2024-01-08 22:17:02.759375 I | raft: f87838631c8138de became follower at term 2
	2024-01-08 22:17:02.759490 I | raft: newRaft f87838631c8138de [peers: [], term: 2, commit: 520, applied: 0, lastindex: 520, lastterm: 2]
	2024-01-08 22:17:02.776353 W | auth: simple token is not cryptographically signed
	2024-01-08 22:17:02.780321 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-08 22:17:02.782618 I | etcdserver/membership: added member f87838631c8138de [https://192.168.39.183:2380] to cluster 2dc4003dc2fbf749
	2024-01-08 22:17:02.782712 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 22:17:02.783113 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-08 22:17:02.783238 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-08 22:17:02.783522 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-08 22:17:02.783802 I | embed: listening for metrics on http://192.168.39.183:2381
	2024-01-08 22:17:04.560584 I | raft: f87838631c8138de is starting a new election at term 2
	2024-01-08 22:17:04.560622 I | raft: f87838631c8138de became candidate at term 3
	2024-01-08 22:17:04.560638 I | raft: f87838631c8138de received MsgVoteResp from f87838631c8138de at term 3
	2024-01-08 22:17:04.560650 I | raft: f87838631c8138de became leader at term 3
	2024-01-08 22:17:04.560656 I | raft: raft.node: f87838631c8138de elected leader f87838631c8138de at term 3
	2024-01-08 22:17:04.563698 I | embed: ready to serve client requests
	2024-01-08 22:17:04.564468 I | etcdserver: published {Name:old-k8s-version-079759 ClientURLs:[https://192.168.39.183:2379]} to cluster 2dc4003dc2fbf749
	2024-01-08 22:17:04.564640 I | embed: ready to serve client requests
	2024-01-08 22:17:04.565648 I | embed: serving client requests on 192.168.39.183:2379
	2024-01-08 22:17:04.566107 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 22:26:23 up 10 min,  0 users,  load average: 0.16, 0.23, 0.15
	Linux old-k8s-version-079759 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592] <==
	I0108 22:18:10.099676       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:18:10.099805       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:18:10.099856       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:18:10.099872       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:20:10.100234       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:20:10.100396       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:20:10.100463       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:20:10.100471       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:22:09.208988       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:22:09.209374       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:22:09.209486       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:22:09.209524       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:23:09.209890       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:23:09.210073       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:23:09.210115       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:23:09.210122       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:25:09.210612       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:25:09.211032       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:25:09.211150       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:25:09.211199       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61] <==
	E0108 22:19:57.731236       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:20:08.558186       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:20:27.983734       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:20:40.560823       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:20:58.237051       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:21:12.563787       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:21:28.490258       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:21:44.566877       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:21:58.742659       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:22:16.570116       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:22:28.995036       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:22:48.572456       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:22:59.249229       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:23:20.575095       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:23:29.501601       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:23:52.577724       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:23:59.754289       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:24:24.580616       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:24:30.006712       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:24:56.583646       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:25:00.259066       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:25:28.585488       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:25:30.511048       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:26:00.588094       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:26:00.763724       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420] <==
	W0108 22:06:46.668750       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 22:06:46.684160       1 node.go:135] Successfully retrieved node IP: 192.168.39.183
	I0108 22:06:46.684559       1 server_others.go:149] Using iptables Proxier.
	I0108 22:06:46.685373       1 server.go:529] Version: v1.16.0
	I0108 22:06:46.691810       1 config.go:313] Starting service config controller
	I0108 22:06:46.691888       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 22:06:46.691920       1 config.go:131] Starting endpoints config controller
	I0108 22:06:46.691955       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 22:06:46.797198       1 shared_informer.go:204] Caches are synced for service config 
	I0108 22:06:46.797340       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0108 22:17:10.730439       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 22:17:10.743302       1 node.go:135] Successfully retrieved node IP: 192.168.39.183
	I0108 22:17:10.743368       1 server_others.go:149] Using iptables Proxier.
	I0108 22:17:10.744060       1 server.go:529] Version: v1.16.0
	I0108 22:17:10.745785       1 config.go:131] Starting endpoints config controller
	I0108 22:17:10.745849       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 22:17:10.746175       1 config.go:313] Starting service config controller
	I0108 22:17:10.746222       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 22:17:10.846841       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0108 22:17:10.849109       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab] <==
	E0108 22:06:23.068050       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:06:23.074906       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:06:24.065000       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:06:24.066937       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:06:24.069589       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:06:24.069684       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:06:24.071043       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:06:24.072070       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:06:24.073780       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:06:24.074768       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:06:24.076525       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:06:24.080024       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:06:24.080662       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:06:42.931136       1 factory.go:585] pod is already present in the activeQ
	I0108 22:17:02.255631       1 serving.go:319] Generated self-signed cert in-memory
	W0108 22:17:08.134473       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 22:17:08.134696       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:17:08.134729       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 22:17:08.134830       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 22:17:08.143399       1 server.go:143] Version: v1.16.0
	I0108 22:17:08.147227       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0108 22:17:08.156754       1 authorization.go:47] Authorization is disabled
	W0108 22:17:08.157027       1 authentication.go:79] Authentication is disabled
	I0108 22:17:08.161064       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0108 22:17:08.171336       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:16:28 UTC, ends at Mon 2024-01-08 22:26:23 UTC. --
	Jan 08 22:21:53 old-k8s-version-079759 kubelet[1034]: E0108 22:21:53.681177    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:21:59 old-k8s-version-079759 kubelet[1034]: E0108 22:21:59.758814    1034 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 08 22:22:05 old-k8s-version-079759 kubelet[1034]: E0108 22:22:05.681491    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:22:18 old-k8s-version-079759 kubelet[1034]: E0108 22:22:18.681100    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:22:29 old-k8s-version-079759 kubelet[1034]: E0108 22:22:29.681319    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:22:44 old-k8s-version-079759 kubelet[1034]: E0108 22:22:44.680542    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:22:55 old-k8s-version-079759 kubelet[1034]: E0108 22:22:55.682011    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:23:10 old-k8s-version-079759 kubelet[1034]: E0108 22:23:10.695442    1034 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 22:23:10 old-k8s-version-079759 kubelet[1034]: E0108 22:23:10.695554    1034 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 22:23:10 old-k8s-version-079759 kubelet[1034]: E0108 22:23:10.695618    1034 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 22:23:10 old-k8s-version-079759 kubelet[1034]: E0108 22:23:10.695651    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 08 22:23:24 old-k8s-version-079759 kubelet[1034]: E0108 22:23:24.681871    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:23:37 old-k8s-version-079759 kubelet[1034]: E0108 22:23:37.682376    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:23:50 old-k8s-version-079759 kubelet[1034]: E0108 22:23:50.680587    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:24:01 old-k8s-version-079759 kubelet[1034]: E0108 22:24:01.682732    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:24:13 old-k8s-version-079759 kubelet[1034]: E0108 22:24:13.681392    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:24:26 old-k8s-version-079759 kubelet[1034]: E0108 22:24:26.681082    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:24:41 old-k8s-version-079759 kubelet[1034]: E0108 22:24:41.680654    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:24:56 old-k8s-version-079759 kubelet[1034]: E0108 22:24:56.681818    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:25:10 old-k8s-version-079759 kubelet[1034]: E0108 22:25:10.681083    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:25:22 old-k8s-version-079759 kubelet[1034]: E0108 22:25:22.680887    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:25:34 old-k8s-version-079759 kubelet[1034]: E0108 22:25:34.681302    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:25:49 old-k8s-version-079759 kubelet[1034]: E0108 22:25:49.681788    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:26:02 old-k8s-version-079759 kubelet[1034]: E0108 22:26:02.680803    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:26:16 old-k8s-version-079759 kubelet[1034]: E0108 22:26:16.681817    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f] <==
	I0108 22:06:46.639039       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 22:07:16.642423       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	I0108 22:17:09.165520       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 22:17:39.175222       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9] <==
	I0108 22:07:17.094878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:07:17.114141       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:07:17.114405       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:07:17.133989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:07:17.135022       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48697b63-5676-4f6a-8f67-c0b173c18024", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-079759_5ab639fe-eef1-4024-8927-a3fde7e1b1d8 became leader
	I0108 22:07:17.136536       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-079759_5ab639fe-eef1-4024-8927-a3fde7e1b1d8!
	I0108 22:07:17.238017       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-079759_5ab639fe-eef1-4024-8927-a3fde7e1b1d8!
	I0108 22:17:40.069803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:17:40.083344       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:17:40.083419       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:17:57.491284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:17:57.492309       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48697b63-5676-4f6a-8f67-c0b173c18024", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-079759_3d1860c1-539c-4eb4-b1e7-aacf51850f57 became leader
	I0108 22:17:57.492430       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-079759_3d1860c1-539c-4eb4-b1e7-aacf51850f57!
	I0108 22:17:57.593228       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-079759_3d1860c1-539c-4eb4-b1e7-aacf51850f57!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079759 -n old-k8s-version-079759
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-079759 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-sdlnw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-079759 describe pod metrics-server-74d5856cc6-sdlnw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-079759 describe pod metrics-server-74d5856cc6-sdlnw: exit status 1 (85.44269ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-sdlnw" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-079759 describe pod metrics-server-74d5856cc6-sdlnw: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.97s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.82s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 22:21:20.147190  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 22:22:44.574693  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 22:24:44.964281  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 22:24:56.854149  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-675668 -n no-preload-675668
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-08 22:30:08.679153759 +0000 UTC m=+5284.380276099
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675668 -n no-preload-675668
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-675668 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-675668 logs -n 25: (1.968334404s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-523607                              | cert-expiration-523607       | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343954 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | disable-driver-mounts-343954                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:09 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079759        | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC | 08 Jan 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-675668             | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-903819            | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-292054  | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC | 08 Jan 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC |                     |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079759             | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC | 08 Jan 24 22:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-675668                  | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-903819                 | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-292054       | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:26 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:11:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:11:46.087099  375556 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:11:46.087257  375556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:46.087268  375556 out.go:309] Setting ErrFile to fd 2...
	I0108 22:11:46.087273  375556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:46.087523  375556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:11:46.088153  375556 out.go:303] Setting JSON to false
	I0108 22:11:46.089299  375556 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10432,"bootTime":1704741474,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:11:46.089374  375556 start.go:138] virtualization: kvm guest
	I0108 22:11:46.092180  375556 out.go:177] * [default-k8s-diff-port-292054] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:11:46.093649  375556 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:11:46.093727  375556 notify.go:220] Checking for updates...
	I0108 22:11:46.095251  375556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:11:46.097142  375556 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:11:46.099048  375556 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:11:46.100864  375556 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:11:46.102762  375556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:11:46.105085  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:11:46.105575  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:11:46.105654  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:11:46.122253  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0108 22:11:46.122758  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:11:46.123342  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:11:46.123412  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:11:46.123752  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:11:46.123910  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:11:46.124157  375556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:11:46.124499  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:11:46.124539  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:11:46.140751  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0108 22:11:46.141282  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:11:46.141773  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:11:46.141798  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:11:46.142141  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:11:46.142444  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:11:46.184643  375556 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 22:11:46.186001  375556 start.go:298] selected driver: kvm2
	I0108 22:11:46.186020  375556 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:11:46.186148  375556 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:11:46.186947  375556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:46.187023  375556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:11:46.203781  375556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:11:46.204243  375556 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:11:46.204341  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:11:46.204355  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:11:46.204368  375556 start_flags.go:321] config:
	{Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-29205
4 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:11:46.204574  375556 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:46.206922  375556 out.go:177] * Starting control plane node default-k8s-diff-port-292054 in cluster default-k8s-diff-port-292054
	I0108 22:11:49.059974  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:11:46.208771  375556 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:11:46.208837  375556 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:11:46.208846  375556 cache.go:56] Caching tarball of preloaded images
	I0108 22:11:46.208953  375556 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:11:46.208964  375556 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:11:46.209090  375556 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/config.json ...
	I0108 22:11:46.209292  375556 start.go:365] acquiring machines lock for default-k8s-diff-port-292054: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:11:52.131718  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:11:58.211727  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:01.283728  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:07.363651  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:10.435843  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:16.515718  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:19.587893  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:25.667716  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:28.739741  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:34.819670  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:37.891747  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:43.971702  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:47.043706  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:53.123662  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:56.195726  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:02.275699  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:05.347708  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:11.427670  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:14.499733  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:20.579716  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:23.651809  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:29.731813  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:32.803834  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:38.883645  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:41.955722  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:48.035781  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:51.107833  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:57.187725  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:00.259743  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:06.339763  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:09.411776  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:15.491797  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:18.563880  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:24.643806  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:27.715717  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:33.795783  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:36.867725  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:42.947651  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:46.019719  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:52.099719  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:55.171662  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:01.251699  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:04.323666  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:07.328244  375205 start.go:369] acquired machines lock for "no-preload-675668" in 4m2.333038111s
	I0108 22:15:07.328384  375205 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:07.328398  375205 fix.go:54] fixHost starting: 
	I0108 22:15:07.328972  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:07.329012  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:07.346002  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0108 22:15:07.346606  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:07.347087  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:15:07.347112  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:07.347614  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:07.347816  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:07.347977  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:15:07.349843  375205 fix.go:102] recreateIfNeeded on no-preload-675668: state=Stopped err=<nil>
	I0108 22:15:07.349873  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	W0108 22:15:07.350055  375205 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:07.352092  375205 out.go:177] * Restarting existing kvm2 VM for "no-preload-675668" ...
	I0108 22:15:07.325708  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:07.325751  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:15:07.327981  374880 machine.go:91] provisioned docker machine in 4m37.376179376s
	I0108 22:15:07.328067  374880 fix.go:56] fixHost completed within 4m37.402208453s
	I0108 22:15:07.328080  374880 start.go:83] releasing machines lock for "old-k8s-version-079759", held for 4m37.402236557s
	W0108 22:15:07.328149  374880 start.go:694] error starting host: provision: host is not running
	W0108 22:15:07.328386  374880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0108 22:15:07.328401  374880 start.go:709] Will try again in 5 seconds ...
	I0108 22:15:07.353648  375205 main.go:141] libmachine: (no-preload-675668) Calling .Start
	I0108 22:15:07.353904  375205 main.go:141] libmachine: (no-preload-675668) Ensuring networks are active...
	I0108 22:15:07.354917  375205 main.go:141] libmachine: (no-preload-675668) Ensuring network default is active
	I0108 22:15:07.355390  375205 main.go:141] libmachine: (no-preload-675668) Ensuring network mk-no-preload-675668 is active
	I0108 22:15:07.355764  375205 main.go:141] libmachine: (no-preload-675668) Getting domain xml...
	I0108 22:15:07.356506  375205 main.go:141] libmachine: (no-preload-675668) Creating domain...
	I0108 22:15:08.673735  375205 main.go:141] libmachine: (no-preload-675668) Waiting to get IP...
	I0108 22:15:08.674861  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:08.675407  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:08.675502  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:08.675369  376073 retry.go:31] will retry after 298.445271ms: waiting for machine to come up
	I0108 22:15:08.976053  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:08.976594  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:08.976624  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:08.976525  376073 retry.go:31] will retry after 372.862343ms: waiting for machine to come up
	I0108 22:15:09.351338  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:09.351843  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:09.351864  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:09.351801  376073 retry.go:31] will retry after 463.145179ms: waiting for machine to come up
	I0108 22:15:09.816629  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:09.817035  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:09.817059  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:09.816979  376073 retry.go:31] will retry after 390.229237ms: waiting for machine to come up
	I0108 22:15:12.328668  374880 start.go:365] acquiring machines lock for old-k8s-version-079759: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:15:10.208639  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:10.209034  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:10.209068  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:10.208972  376073 retry.go:31] will retry after 547.133251ms: waiting for machine to come up
	I0108 22:15:10.758143  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:10.758742  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:10.758779  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:10.758673  376073 retry.go:31] will retry after 833.304996ms: waiting for machine to come up
	I0108 22:15:11.594018  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:11.594517  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:11.594551  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:11.594482  376073 retry.go:31] will retry after 1.155542967s: waiting for machine to come up
	I0108 22:15:12.751694  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:12.752196  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:12.752233  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:12.752162  376073 retry.go:31] will retry after 1.197873107s: waiting for machine to come up
	I0108 22:15:13.951593  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:13.952050  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:13.952072  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:13.952005  376073 retry.go:31] will retry after 1.257059014s: waiting for machine to come up
	I0108 22:15:15.211632  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:15.212133  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:15.212161  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:15.212090  376073 retry.go:31] will retry after 2.27321783s: waiting for machine to come up
	I0108 22:15:17.487177  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:17.487684  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:17.487712  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:17.487631  376073 retry.go:31] will retry after 2.218202362s: waiting for machine to come up
	I0108 22:15:19.709130  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:19.709618  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:19.709651  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:19.709552  376073 retry.go:31] will retry after 2.976711307s: waiting for machine to come up
	I0108 22:15:22.687741  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:22.688337  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:22.688373  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:22.688238  376073 retry.go:31] will retry after 4.028238242s: waiting for machine to come up
	I0108 22:15:28.088862  375293 start.go:369] acquired machines lock for "embed-certs-903819" in 4m15.164556555s
	I0108 22:15:28.088954  375293 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:28.088965  375293 fix.go:54] fixHost starting: 
	I0108 22:15:28.089472  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:28.089526  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:28.108636  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0108 22:15:28.109141  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:28.109765  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:15:28.109816  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:28.110214  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:28.110458  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:28.110642  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:15:28.112595  375293 fix.go:102] recreateIfNeeded on embed-certs-903819: state=Stopped err=<nil>
	I0108 22:15:28.112635  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	W0108 22:15:28.112883  375293 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:28.115226  375293 out.go:177] * Restarting existing kvm2 VM for "embed-certs-903819" ...
	I0108 22:15:26.721451  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.721880  375205 main.go:141] libmachine: (no-preload-675668) Found IP for machine: 192.168.61.153
	I0108 22:15:26.721905  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has current primary IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.721912  375205 main.go:141] libmachine: (no-preload-675668) Reserving static IP address...
	I0108 22:15:26.722449  375205 main.go:141] libmachine: (no-preload-675668) Reserved static IP address: 192.168.61.153
	I0108 22:15:26.722475  375205 main.go:141] libmachine: (no-preload-675668) Waiting for SSH to be available...
	I0108 22:15:26.722498  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "no-preload-675668", mac: "52:54:00:08:3b:59", ip: "192.168.61.153"} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.722528  375205 main.go:141] libmachine: (no-preload-675668) DBG | skip adding static IP to network mk-no-preload-675668 - found existing host DHCP lease matching {name: "no-preload-675668", mac: "52:54:00:08:3b:59", ip: "192.168.61.153"}
	I0108 22:15:26.722545  375205 main.go:141] libmachine: (no-preload-675668) DBG | Getting to WaitForSSH function...
	I0108 22:15:26.724512  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.724861  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.724898  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.725004  375205 main.go:141] libmachine: (no-preload-675668) DBG | Using SSH client type: external
	I0108 22:15:26.725078  375205 main.go:141] libmachine: (no-preload-675668) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa (-rw-------)
	I0108 22:15:26.725130  375205 main.go:141] libmachine: (no-preload-675668) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:15:26.725152  375205 main.go:141] libmachine: (no-preload-675668) DBG | About to run SSH command:
	I0108 22:15:26.725172  375205 main.go:141] libmachine: (no-preload-675668) DBG | exit 0
	I0108 22:15:26.815569  375205 main.go:141] libmachine: (no-preload-675668) DBG | SSH cmd err, output: <nil>: 
	I0108 22:15:26.816005  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetConfigRaw
	I0108 22:15:26.816711  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:26.819269  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.819636  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.819681  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.819964  375205 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/config.json ...
	I0108 22:15:26.820191  375205 machine.go:88] provisioning docker machine ...
	I0108 22:15:26.820215  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:26.820446  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:26.820626  375205 buildroot.go:166] provisioning hostname "no-preload-675668"
	I0108 22:15:26.820648  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:26.820790  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:26.823021  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.823390  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.823421  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.823567  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:26.823781  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.823943  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.824103  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:26.824331  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:26.824924  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:26.824958  375205 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-675668 && echo "no-preload-675668" | sudo tee /etc/hostname
	I0108 22:15:26.960664  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-675668
	
	I0108 22:15:26.960713  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:26.964110  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.964397  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.964437  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.964605  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:26.964918  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.965153  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.965334  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:26.965543  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:26.965958  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:26.965985  375205 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-675668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-675668/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-675668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:15:27.102584  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:27.102632  375205 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:15:27.102663  375205 buildroot.go:174] setting up certificates
	I0108 22:15:27.102678  375205 provision.go:83] configureAuth start
	I0108 22:15:27.102688  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:27.103024  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:27.105986  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.106379  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.106400  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.106586  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.108670  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.109003  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.109029  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.109216  375205 provision.go:138] copyHostCerts
	I0108 22:15:27.109300  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:15:27.109320  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:15:27.109426  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:15:27.109561  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:15:27.109571  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:15:27.109599  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:15:27.109663  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:15:27.109670  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:15:27.109691  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:15:27.109751  375205 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.no-preload-675668 san=[192.168.61.153 192.168.61.153 localhost 127.0.0.1 minikube no-preload-675668]
	I0108 22:15:27.297801  375205 provision.go:172] copyRemoteCerts
	I0108 22:15:27.297888  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:15:27.297915  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.301050  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.301503  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.301545  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.301737  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.301955  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.302121  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.302265  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:27.394076  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:15:27.420873  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 22:15:27.446852  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:15:27.475352  375205 provision.go:86] duration metric: configureAuth took 372.6598ms
	I0108 22:15:27.475406  375205 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:15:27.475661  375205 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:15:27.475793  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.478557  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.478872  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.478906  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.479091  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.479354  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.479579  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.479768  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.479939  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:27.480273  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:27.480291  375205 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:15:27.822802  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:15:27.822834  375205 machine.go:91] provisioned docker machine in 1.002628424s
	I0108 22:15:27.822845  375205 start.go:300] post-start starting for "no-preload-675668" (driver="kvm2")
	I0108 22:15:27.822858  375205 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:15:27.822874  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:27.823282  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:15:27.823320  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.825948  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.826276  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.826298  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.826407  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.826597  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.826793  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.826922  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:27.918118  375205 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:15:27.922998  375205 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:15:27.923044  375205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:15:27.923151  375205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:15:27.923275  375205 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:15:27.923407  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:15:27.933715  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:27.960061  375205 start.go:303] post-start completed in 137.19795ms
	I0108 22:15:27.960109  375205 fix.go:56] fixHost completed within 20.631710493s
	I0108 22:15:27.960137  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.963254  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.963656  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.963688  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.964017  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.964325  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.964533  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.964722  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.964945  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:27.965301  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:27.965314  375205 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:15:28.088665  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752128.028688224
	
	I0108 22:15:28.088696  375205 fix.go:206] guest clock: 1704752128.028688224
	I0108 22:15:28.088706  375205 fix.go:219] Guest: 2024-01-08 22:15:28.028688224 +0000 UTC Remote: 2024-01-08 22:15:27.960113957 +0000 UTC m=+263.145626296 (delta=68.574267ms)
	I0108 22:15:28.088734  375205 fix.go:190] guest clock delta is within tolerance: 68.574267ms
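The guest-clock lines above compare the VM's `date +%s.%N`-style reading against the host's clock and accept the skew when it is small. Below is a worked Go example using the exact timestamps from the log; the one-second tolerance constant is an assumption, not the value fix.go uses.

```go
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	guest := time.Unix(1704752128, 28688224)                          // guest clock 1704752128.028688224
	remote := time.Date(2024, 1, 8, 22, 15, 27, 960113957, time.UTC)  // host-side reading from the log
	delta := guest.Sub(remote)

	const tolerance = time.Second // assumed threshold for "within tolerance"
	if math.Abs(float64(delta)) < float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints ~68.574267ms
	} else {
		fmt.Printf("guest clock skew too large: %v\n", delta)
	}
}
```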
	I0108 22:15:28.088742  375205 start.go:83] releasing machines lock for "no-preload-675668", held for 20.760456272s
	I0108 22:15:28.088775  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.089136  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:28.091887  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.092255  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.092274  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.092537  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093187  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093416  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093504  375205 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:15:28.093546  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:28.093722  375205 ssh_runner.go:195] Run: cat /version.json
	I0108 22:15:28.093769  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:28.096920  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.096969  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097385  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.097428  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097460  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.097482  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097739  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:28.097767  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:28.098020  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:28.098074  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:28.098243  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:28.098254  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:28.098459  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:28.098460  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:28.221319  375205 ssh_runner.go:195] Run: systemctl --version
	I0108 22:15:28.227501  375205 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:15:28.379259  375205 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:15:28.386159  375205 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:15:28.386272  375205 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:15:28.404416  375205 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:15:28.404469  375205 start.go:475] detecting cgroup driver to use...
	I0108 22:15:28.404575  375205 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:15:28.421612  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:15:28.438920  375205 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:15:28.439001  375205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:15:28.455220  375205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:15:28.473982  375205 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:15:28.610132  375205 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:15:28.735485  375205 docker.go:219] disabling docker service ...
	I0108 22:15:28.735627  375205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:15:28.750327  375205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:15:28.768782  375205 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:15:28.891784  375205 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:15:29.006680  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:15:29.023187  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:15:29.043520  375205 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:15:29.043601  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.056442  375205 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:15:29.056525  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.066874  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.077969  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.090310  375205 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:15:29.102253  375205 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:15:29.114920  375205 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:15:29.115022  375205 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:15:29.131677  375205 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:15:29.142326  375205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:15:29.259562  375205 ssh_runner.go:195] Run: sudo systemctl restart crio
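The cri-o configuration steps above reduce to four in-place edits of /etc/crio/crio.conf.d/02-crio.conf followed by a systemd reload and a restart of crio. The Go sketch below condenses them; the shell commands are copied verbatim from the log, while running them through a local `exec.Command("sh", "-c", …)` instead of minikube's ssh_runner is a simplification for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell step and surfaces its combined output on failure.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	steps := []string{
		// Point cri-o at the desired pause image and cgroup manager.
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// Apply the new unit configuration and restart the runtime.
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
}
```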
	I0108 22:15:29.463482  375205 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:15:29.463554  375205 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:15:29.468579  375205 start.go:543] Will wait 60s for crictl version
	I0108 22:15:29.468665  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:29.476630  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:15:29.525900  375205 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:15:29.526053  375205 ssh_runner.go:195] Run: crio --version
	I0108 22:15:29.579948  375205 ssh_runner.go:195] Run: crio --version
	I0108 22:15:29.632573  375205 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0108 22:15:29.634161  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:29.637972  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:29.638472  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:29.638514  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:29.638828  375205 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0108 22:15:29.644170  375205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:29.658242  375205 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:15:29.658302  375205 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:29.701366  375205 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0108 22:15:29.701422  375205 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:15:29.701626  375205 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0108 22:15:29.701685  375205 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.701583  375205 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.701743  375205 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.701674  375205 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.701597  375205 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:29.701743  375205 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.701582  375205 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.703644  375205 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:29.703679  375205 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.703705  375205 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0108 22:15:29.703722  375205 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.703643  375205 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.703651  375205 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.703655  375205 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.703652  375205 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:28.117212  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Start
	I0108 22:15:28.117480  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring networks are active...
	I0108 22:15:28.118363  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring network default is active
	I0108 22:15:28.118783  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring network mk-embed-certs-903819 is active
	I0108 22:15:28.119425  375293 main.go:141] libmachine: (embed-certs-903819) Getting domain xml...
	I0108 22:15:28.120203  375293 main.go:141] libmachine: (embed-certs-903819) Creating domain...
	I0108 22:15:29.474037  375293 main.go:141] libmachine: (embed-certs-903819) Waiting to get IP...
	I0108 22:15:29.475109  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:29.475735  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:29.475862  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:29.475696  376188 retry.go:31] will retry after 284.136631ms: waiting for machine to come up
	I0108 22:15:29.762077  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:29.762586  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:29.762614  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:29.762538  376188 retry.go:31] will retry after 303.052805ms: waiting for machine to come up
	I0108 22:15:30.067299  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:30.067947  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:30.067997  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:30.067822  376188 retry.go:31] will retry after 471.679894ms: waiting for machine to come up
	I0108 22:15:30.541942  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:30.542626  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:30.542658  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:30.542542  376188 retry.go:31] will retry after 534.448155ms: waiting for machine to come up
	I0108 22:15:31.078549  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:31.079168  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:31.079212  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:31.079092  376188 retry.go:31] will retry after 595.348277ms: waiting for machine to come up
	I0108 22:15:31.675832  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:31.676249  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:31.676278  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:31.676209  376188 retry.go:31] will retry after 618.587146ms: waiting for machine to come up
	I0108 22:15:32.296396  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:32.296982  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:32.297011  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:32.296820  376188 retry.go:31] will retry after 730.322233ms: waiting for machine to come up
	I0108 22:15:29.877942  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.891002  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.891714  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.893908  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0108 22:15:29.901880  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.959729  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.975241  375205 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0108 22:15:29.975301  375205 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.975308  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.975351  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.022214  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.074289  375205 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0108 22:15:30.074350  375205 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:30.074422  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.107460  375205 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0108 22:15:30.107547  375205 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:30.107634  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.137086  375205 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0108 22:15:30.137155  375205 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:30.137227  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.156198  375205 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0108 22:15:30.156291  375205 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:30.156357  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163468  375205 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0108 22:15:30.163522  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:30.163532  375205 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:30.163563  375205 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0108 22:15:30.163616  375205 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.163654  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:30.163660  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163762  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:30.163779  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:30.163583  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163849  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:30.304360  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:30.304458  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0108 22:15:30.304478  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0108 22:15:30.304481  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:30.304564  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.304603  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.304568  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:30.304636  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:30.304678  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:30.304738  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:30.307415  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:30.307516  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:30.322465  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0108 22:15:30.322505  375205 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.322616  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.323275  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390462  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0108 22:15:30.390530  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0108 22:15:30.390546  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 22:15:30.390566  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390612  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390651  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:30.390657  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:32.649486  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.326834963s)
	I0108 22:15:32.649532  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0108 22:15:32.649560  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:32.649569  375205 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.258890537s)
	I0108 22:15:32.649612  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:32.649622  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0108 22:15:32.649573  375205 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.258898806s)
	I0108 22:15:32.649638  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0108 22:15:33.028658  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:33.029086  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:33.029117  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:33.029023  376188 retry.go:31] will retry after 1.009306133s: waiting for machine to come up
	I0108 22:15:34.040145  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:34.040574  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:34.040610  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:34.040517  376188 retry.go:31] will retry after 1.215287271s: waiting for machine to come up
	I0108 22:15:35.258130  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:35.258735  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:35.258767  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:35.258669  376188 retry.go:31] will retry after 1.604579686s: waiting for machine to come up
	I0108 22:15:36.865156  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:36.865635  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:36.865671  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:36.865575  376188 retry.go:31] will retry after 1.938816817s: waiting for machine to come up
	I0108 22:15:35.937824  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.288173217s)
	I0108 22:15:35.937859  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0108 22:15:35.937899  375205 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:35.938005  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:38.805792  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:38.806390  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:38.806420  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:38.806318  376188 retry.go:31] will retry after 2.933374936s: waiting for machine to come up
	I0108 22:15:41.741267  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:41.741924  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:41.741962  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:41.741850  376188 retry.go:31] will retry after 3.549554778s: waiting for machine to come up
	I0108 22:15:40.512566  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.574525189s)
	I0108 22:15:40.512605  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0108 22:15:40.512642  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:40.512699  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:43.180687  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.667951486s)
	I0108 22:15:43.180730  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0108 22:15:43.180766  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:43.180849  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:44.539187  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.35830707s)
	I0108 22:15:44.539234  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0108 22:15:44.539274  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:44.539335  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:45.294867  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:45.295522  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:45.295572  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:45.295439  376188 retry.go:31] will retry after 5.642834673s: waiting for machine to come up
	I0108 22:15:46.498360  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.95899411s)
	I0108 22:15:46.498392  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0108 22:15:46.498417  375205 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:46.498473  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:47.553626  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.055107765s)
	I0108 22:15:47.553672  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0108 22:15:47.553708  375205 cache_images.go:123] Successfully loaded all cached images
	I0108 22:15:47.553715  375205 cache_images.go:92] LoadImages completed in 17.852269213s
	I0108 22:15:47.553796  375205 ssh_runner.go:195] Run: crio config
	I0108 22:15:47.626385  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:15:47.626428  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:15:47.626471  375205 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:15:47.626503  375205 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.153 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-675668 NodeName:no-preload-675668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:15:47.626764  375205 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-675668"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:15:47.626889  375205 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-675668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-675668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:15:47.626994  375205 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0108 22:15:47.638161  375205 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:15:47.638263  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:15:47.648004  375205 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0108 22:15:47.667877  375205 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0108 22:15:47.685914  375205 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0108 22:15:47.705814  375205 ssh_runner.go:195] Run: grep 192.168.61.153	control-plane.minikube.internal$ /etc/hosts
	I0108 22:15:47.709842  375205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:47.724788  375205 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668 for IP: 192.168.61.153
	I0108 22:15:47.724877  375205 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:15:47.725349  375205 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:15:47.725420  375205 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:15:47.725541  375205 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.key
	I0108 22:15:47.725626  375205 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.key.0768d075
	I0108 22:15:47.725668  375205 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.key
	I0108 22:15:47.725793  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:15:47.725822  375205 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:15:47.725837  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:15:47.725861  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:15:47.725886  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:15:47.725908  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:15:47.725952  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:47.727130  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:15:47.753432  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:15:47.780962  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:15:47.807446  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:15:47.834334  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:15:47.861638  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:15:47.889479  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:15:47.916119  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:15:47.944635  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:15:47.971740  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:15:47.998594  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:15:48.025907  375205 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:15:48.044525  375205 ssh_runner.go:195] Run: openssl version
	I0108 22:15:48.050542  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:15:48.061205  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.066945  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.067060  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.074266  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:15:48.084613  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:15:48.095856  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.101596  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.101677  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.108991  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:15:48.120690  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:15:48.130747  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.135480  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.135576  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.141462  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:15:48.152597  375205 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:15:48.158657  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:15:48.165978  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:15:48.174164  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:15:48.181140  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:15:48.187819  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:15:48.194088  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 22:15:48.200487  375205 kubeadm.go:404] StartCluster: {Name:no-preload-675668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-675668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.153 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:15:48.200612  375205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:15:48.200686  375205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:15:48.244804  375205 cri.go:89] found id: ""
	I0108 22:15:48.244894  375205 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:15:48.255502  375205 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:15:48.255549  375205 kubeadm.go:636] restartCluster start
	I0108 22:15:48.255625  375205 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:15:48.265914  375205 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:48.267815  375205 kubeconfig.go:92] found "no-preload-675668" server: "https://192.168.61.153:8443"
	I0108 22:15:48.271555  375205 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:15:48.281619  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:48.281694  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:48.293360  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:48.781917  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:48.782063  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:48.795101  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:49.281683  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:49.281784  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:49.295392  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:49.781910  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:49.782011  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:49.795016  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.309259  375556 start.go:369] acquired machines lock for "default-k8s-diff-port-292054" in 4m6.099929885s
	I0108 22:15:52.309332  375556 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:52.309353  375556 fix.go:54] fixHost starting: 
	I0108 22:15:52.309795  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:52.309827  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:52.327510  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42663
	I0108 22:15:52.328130  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:52.328844  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:15:52.328877  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:52.329458  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:52.329740  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:15:52.329938  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:15:52.331851  375556 fix.go:102] recreateIfNeeded on default-k8s-diff-port-292054: state=Stopped err=<nil>
	I0108 22:15:52.331887  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	W0108 22:15:52.332071  375556 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:52.334604  375556 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-292054" ...
	I0108 22:15:50.942498  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.943038  375293 main.go:141] libmachine: (embed-certs-903819) Found IP for machine: 192.168.72.132
	I0108 22:15:50.943076  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has current primary IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.943087  375293 main.go:141] libmachine: (embed-certs-903819) Reserving static IP address...
	I0108 22:15:50.943577  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "embed-certs-903819", mac: "52:54:00:73:74:da", ip: "192.168.72.132"} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:50.943606  375293 main.go:141] libmachine: (embed-certs-903819) Reserved static IP address: 192.168.72.132
	I0108 22:15:50.943620  375293 main.go:141] libmachine: (embed-certs-903819) DBG | skip adding static IP to network mk-embed-certs-903819 - found existing host DHCP lease matching {name: "embed-certs-903819", mac: "52:54:00:73:74:da", ip: "192.168.72.132"}
	I0108 22:15:50.943636  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Getting to WaitForSSH function...
	I0108 22:15:50.943655  375293 main.go:141] libmachine: (embed-certs-903819) Waiting for SSH to be available...
	I0108 22:15:50.945879  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.946330  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:50.946362  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.946493  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Using SSH client type: external
	I0108 22:15:50.946532  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa (-rw-------)
	I0108 22:15:50.946589  375293 main.go:141] libmachine: (embed-certs-903819) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:15:50.946606  375293 main.go:141] libmachine: (embed-certs-903819) DBG | About to run SSH command:
	I0108 22:15:50.946641  375293 main.go:141] libmachine: (embed-certs-903819) DBG | exit 0
	I0108 22:15:51.051155  375293 main.go:141] libmachine: (embed-certs-903819) DBG | SSH cmd err, output: <nil>: 
	I0108 22:15:51.051655  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetConfigRaw
	I0108 22:15:51.052363  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:51.054890  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.055247  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.055276  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.055618  375293 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/config.json ...
	I0108 22:15:51.055862  375293 machine.go:88] provisioning docker machine ...
	I0108 22:15:51.055887  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:51.056117  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.056263  375293 buildroot.go:166] provisioning hostname "embed-certs-903819"
	I0108 22:15:51.056283  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.056427  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.058406  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.058775  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.058822  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.058953  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.059154  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.059318  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.059478  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.059654  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.060145  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.060166  375293 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-903819 && echo "embed-certs-903819" | sudo tee /etc/hostname
	I0108 22:15:51.207967  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-903819
	
	I0108 22:15:51.208007  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.210549  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.210848  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.210876  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.211120  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.211372  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.211539  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.211707  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.211879  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.212375  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.212399  375293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-903819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-903819/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-903819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:15:51.356887  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:51.356936  375293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:15:51.356968  375293 buildroot.go:174] setting up certificates
	I0108 22:15:51.356997  375293 provision.go:83] configureAuth start
	I0108 22:15:51.357012  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.357424  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:51.360156  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.360553  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.360590  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.360735  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.363438  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.363850  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.363905  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.364020  375293 provision.go:138] copyHostCerts
	I0108 22:15:51.364111  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:15:51.364126  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:15:51.364193  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:15:51.364332  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:15:51.364347  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:15:51.364376  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:15:51.364453  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:15:51.364463  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:15:51.364490  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:15:51.364552  375293 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.embed-certs-903819 san=[192.168.72.132 192.168.72.132 localhost 127.0.0.1 minikube embed-certs-903819]
	I0108 22:15:51.472949  375293 provision.go:172] copyRemoteCerts
	I0108 22:15:51.473023  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:15:51.473053  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.476622  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.476975  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.476997  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.477269  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.477524  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.477719  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.477852  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:51.576283  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:15:51.604809  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:15:51.633353  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 22:15:51.660375  375293 provision.go:86] duration metric: configureAuth took 303.352585ms
	I0108 22:15:51.660422  375293 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:15:51.660657  375293 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:15:51.660764  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.664337  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.664738  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.664796  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.665089  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.665394  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.665649  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.665823  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.666047  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.666595  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.666633  375293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:15:52.023397  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:15:52.023450  375293 machine.go:91] provisioned docker machine in 967.568803ms
	I0108 22:15:52.023469  375293 start.go:300] post-start starting for "embed-certs-903819" (driver="kvm2")
	I0108 22:15:52.023485  375293 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:15:52.023514  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.023922  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:15:52.023979  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.026998  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.027417  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.027447  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.027665  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.027875  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.028050  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.028240  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.126087  375293 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:15:52.130371  375293 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:15:52.130414  375293 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:15:52.130509  375293 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:15:52.130609  375293 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:15:52.130738  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:15:52.139897  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:52.166648  375293 start.go:303] post-start completed in 143.156785ms
	I0108 22:15:52.166691  375293 fix.go:56] fixHost completed within 24.077726567s
	I0108 22:15:52.166721  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.169452  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.169849  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.169880  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.170156  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.170463  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.170716  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.170909  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.171089  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:52.171520  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:52.171535  375293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:15:52.309104  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752152.251541184
	
	I0108 22:15:52.309136  375293 fix.go:206] guest clock: 1704752152.251541184
	I0108 22:15:52.309146  375293 fix.go:219] Guest: 2024-01-08 22:15:52.251541184 +0000 UTC Remote: 2024-01-08 22:15:52.166696501 +0000 UTC m=+279.417512277 (delta=84.844683ms)
	I0108 22:15:52.309173  375293 fix.go:190] guest clock delta is within tolerance: 84.844683ms
	I0108 22:15:52.309180  375293 start.go:83] releasing machines lock for "embed-certs-903819", held for 24.220254192s
	I0108 22:15:52.309214  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.309514  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:52.312538  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.312905  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.312928  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.313161  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313692  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313879  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313971  375293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:15:52.314031  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.314154  375293 ssh_runner.go:195] Run: cat /version.json
	I0108 22:15:52.314185  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.316938  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317214  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317363  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.317398  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.317425  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317456  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317599  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.317746  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.317803  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.317882  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.318074  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.318074  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.318273  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.318332  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.451292  375293 ssh_runner.go:195] Run: systemctl --version
	I0108 22:15:52.459839  375293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:15:52.609989  375293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:15:52.617215  375293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:15:52.617326  375293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:15:52.633017  375293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:15:52.633068  375293 start.go:475] detecting cgroup driver to use...
	I0108 22:15:52.633180  375293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:15:52.649947  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:15:52.664459  375293 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:15:52.664530  375293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:15:52.680105  375293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:15:52.696100  375293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:15:52.814616  375293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:15:52.951975  375293 docker.go:219] disabling docker service ...
	I0108 22:15:52.952086  375293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:15:52.967800  375293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:15:52.982903  375293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:15:53.107033  375293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:15:53.222765  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:15:53.238572  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:15:53.260919  375293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:15:53.261035  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.271980  375293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:15:53.272084  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.283693  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.298686  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.310543  375293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:15:53.322108  375293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:15:53.331904  375293 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:15:53.331982  375293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:15:53.347091  375293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:15:53.358365  375293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:15:53.462607  375293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:15:53.658267  375293 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:15:53.658362  375293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:15:53.663859  375293 start.go:543] Will wait 60s for crictl version
	I0108 22:15:53.663941  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:15:53.668413  375293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:15:53.714319  375293 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:15:53.714456  375293 ssh_runner.go:195] Run: crio --version
	I0108 22:15:53.774601  375293 ssh_runner.go:195] Run: crio --version
	I0108 22:15:53.840055  375293 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
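	(The sed invocations above pin the pause image and the cgroup driver in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. The following is a rough Go sketch of the same two substitutions, applied to an in-memory string rather than run as sed over SSH; the helper name and sample input are illustrative.)

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf mirrors the two substitutions seen in the log above:
	// force pause_image and cgroup_manager to known values regardless of what
	// the config currently says. It works on a string here; minikube runs the
	// equivalent sed against the file inside the guest.
	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = pause.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = cgroup.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return conf
	}

	func main() {
		in := "pause_image = \"k8s.gcr.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
	}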
	I0108 22:15:50.282005  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:50.282118  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:50.296034  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:50.781676  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:50.781865  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:50.794250  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:51.281771  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:51.281866  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:51.296593  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:51.782094  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:51.782193  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:51.797110  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.281711  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:52.281844  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:52.294916  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.782076  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:52.782193  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:52.796700  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:53.282191  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:53.282320  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:53.300226  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:53.781708  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:53.781807  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:53.794426  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:54.281901  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:54.282005  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:54.305276  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:54.781646  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:54.781765  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:54.798991  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
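	(The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` calls above run at roughly half-second intervals until the apiserver process appears or the caller gives up. A self-contained Go sketch of that polling pattern, executed locally instead of over SSH; the 30s timeout is an assumption for the example.)

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerPID polls pgrep about every 500ms, the cadence visible in
	// the log above, until the process shows up or the context deadline expires.
	func waitForAPIServerPID(ctx context.Context) (string, error) {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return string(out), nil // pid(s) found
			}
			select {
			case <-ctx.Done():
				return "", fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
			case <-ticker.C:
				// try again on the next tick
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		if pid, err := waitForAPIServerPID(ctx); err != nil {
			fmt.Println("wait failed:", err)
		} else {
			fmt.Println("apiserver pid:", pid)
		}
	}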
	I0108 22:15:52.336203  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Start
	I0108 22:15:52.336440  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring networks are active...
	I0108 22:15:52.337318  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring network default is active
	I0108 22:15:52.337660  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring network mk-default-k8s-diff-port-292054 is active
	I0108 22:15:52.338019  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Getting domain xml...
	I0108 22:15:52.338689  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Creating domain...
	I0108 22:15:53.715046  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting to get IP...
	I0108 22:15:53.716237  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.716849  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.716944  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:53.716801  376345 retry.go:31] will retry after 252.013763ms: waiting for machine to come up
	I0108 22:15:53.970408  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.971019  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.971049  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:53.970958  376345 retry.go:31] will retry after 266.473735ms: waiting for machine to come up
	I0108 22:15:54.239763  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.240226  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.240251  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:54.240173  376345 retry.go:31] will retry after 429.57645ms: waiting for machine to come up
	I0108 22:15:54.672202  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.672716  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.672752  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:54.672669  376345 retry.go:31] will retry after 585.093805ms: waiting for machine to come up
	I0108 22:15:55.259153  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.259706  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.259743  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:55.259654  376345 retry.go:31] will retry after 689.434093ms: waiting for machine to come up
	I0108 22:15:55.950610  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.951205  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.951239  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:55.951157  376345 retry.go:31] will retry after 895.874654ms: waiting for machine to come up
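	(The retry.go:31 lines above show libmachine waiting for the newly created domain to pick up a DHCP lease, retrying with delays that grow from a few hundred milliseconds to several seconds. A small Go sketch of that retry-with-growing-delay shape; the lookup stub and the growth factor are illustrative, not the libmachine algorithm.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryAfter retries lookup with a randomized, growing delay between
	// attempts, printing a "will retry after ..." line like the log above.
	// lookup stands in for "find the current IP of the domain's DHCP lease".
	func retryAfter(maxAttempts int, lookup func() (string, error)) (string, error) {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, wait)
			time.Sleep(wait)
			delay += delay / 2 // grow the base delay for the next round
		}
		return "", errors.New("machine never reported an IP address")
	}

	func main() {
		attempts := 0
		ip, err := retryAfter(10, func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.10", nil // hypothetical lease
		})
		fmt.Println(ip, err)
	}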
	I0108 22:15:53.841949  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:53.845797  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:53.846200  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:53.846248  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:53.846494  375293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0108 22:15:53.851791  375293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
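	(The one-liner above rewrites /etc/hosts idempotently: strip any stale host.minikube.internal line, then append a fresh one. The same idea in pure Go, operating on a string so the sketch stays self-contained; the function name is illustrative.)

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostsEntry drops every line ending in "\t<name>" and appends a
	// fresh "ip<TAB>name" line, mirroring the grep -v / echo pipeline above.
	func upsertHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n"
		fmt.Print(upsertHostsEntry(hosts, "192.168.72.1", "host.minikube.internal"))
	}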
	I0108 22:15:53.866130  375293 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:15:53.866207  375293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:53.932186  375293 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:15:53.932311  375293 ssh_runner.go:195] Run: which lz4
	I0108 22:15:53.937259  375293 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 22:15:53.944022  375293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:15:53.944077  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 22:15:55.993976  375293 crio.go:444] Took 2.056742 seconds to copy over tarball
	I0108 22:15:55.994073  375293 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:15:55.281653  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:55.281788  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:55.303179  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:55.781655  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:55.781803  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:55.801287  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:56.281804  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:56.281897  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:56.306479  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:56.782123  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:56.782248  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:56.799241  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:57.281778  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:57.281926  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:57.299917  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:57.782255  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:57.782392  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:57.797960  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:58.282738  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:58.282919  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:58.300271  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:58.300333  375205 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:15:58.300349  375205 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:15:58.300365  375205 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:15:58.300452  375205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:15:58.353658  375205 cri.go:89] found id: ""
	I0108 22:15:58.353755  375205 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:15:58.372503  375205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:15:58.393266  375205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:15:58.393366  375205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:15:58.406210  375205 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:15:58.406255  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:58.570457  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:59.811449  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.240942109s)
	I0108 22:15:59.811494  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:56.848455  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:56.848893  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:56.848925  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:56.848869  376345 retry.go:31] will retry after 1.095460706s: waiting for machine to come up
	I0108 22:15:57.946534  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:57.947045  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:57.947084  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:57.947000  376345 retry.go:31] will retry after 975.046116ms: waiting for machine to come up
	I0108 22:15:58.923872  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:58.924402  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:58.924436  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:58.924351  376345 retry.go:31] will retry after 1.855498831s: waiting for machine to come up
	I0108 22:16:00.781295  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:00.781813  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:00.781842  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:00.781745  376345 retry.go:31] will retry after 1.560909915s: waiting for machine to come up
	I0108 22:15:59.648230  375293 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.654100182s)
	I0108 22:15:59.648275  375293 crio.go:451] Took 3.654264 seconds to extract the tarball
	I0108 22:15:59.648293  375293 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:15:59.707614  375293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:59.763291  375293 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:15:59.763318  375293 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:15:59.763416  375293 ssh_runner.go:195] Run: crio config
	I0108 22:15:59.840951  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:15:59.840986  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:15:59.841015  375293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:15:59.841038  375293 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-903819 NodeName:embed-certs-903819 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:15:59.841205  375293 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-903819"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:15:59.841283  375293 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-903819 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-903819 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:15:59.841341  375293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:15:59.854399  375293 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:15:59.854521  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:15:59.864630  375293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0108 22:15:59.887590  375293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:15:59.907618  375293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0108 22:15:59.930429  375293 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I0108 22:15:59.935347  375293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:59.954840  375293 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819 for IP: 192.168.72.132
	I0108 22:15:59.954893  375293 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:15:59.955092  375293 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:15:59.955151  375293 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:15:59.955277  375293 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/client.key
	I0108 22:15:59.955460  375293 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.key.b7fe571d
	I0108 22:15:59.955557  375293 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.key
	I0108 22:15:59.955780  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:15:59.955832  375293 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:15:59.955855  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:15:59.955897  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:15:59.955931  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:15:59.955962  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:15:59.956023  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:59.957003  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:15:59.984268  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:16:00.018065  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:00.049758  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:00.079731  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:00.115904  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:00.148655  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:00.186204  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:00.224356  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:00.258906  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:00.293420  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:00.328219  375293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:00.351811  375293 ssh_runner.go:195] Run: openssl version
	I0108 22:16:00.360327  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:00.373384  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.381553  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.381653  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.391609  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:00.406242  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:00.419455  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.426093  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.426218  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.433793  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:00.446550  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:00.463174  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.470386  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.470471  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.477752  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:00.492003  375293 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:00.498273  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:00.506305  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:00.515120  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:00.523909  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:00.531966  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:00.540080  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
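	(Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate remains valid for at least another 24 hours. An equivalent check with Go's crypto/x509, shown as a sketch; the path used in main is only an example.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certValidFor reports whether the PEM certificate at path is still valid
	// for at least d, the same question -checkend answers (86400s = 24h).
	func certValidFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		// Example path; on a real node this would be one of the certs listed above.
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}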
	I0108 22:16:00.547673  375293 kubeadm.go:404] StartCluster: {Name:embed-certs-903819 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-903819 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:00.547852  375293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:00.547933  375293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:00.596555  375293 cri.go:89] found id: ""
	I0108 22:16:00.596644  375293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:00.607985  375293 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:00.608023  375293 kubeadm.go:636] restartCluster start
	I0108 22:16:00.608092  375293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:00.620669  375293 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:00.621860  375293 kubeconfig.go:92] found "embed-certs-903819" server: "https://192.168.72.132:8443"
	I0108 22:16:00.624246  375293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:00.638481  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:00.638578  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:00.658261  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:01.138670  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:01.138876  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:01.154778  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:01.639152  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:01.639290  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:01.659301  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:02.138679  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:02.138871  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:02.159427  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:02.638859  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:02.638970  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:02.660608  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:00.076906  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:00.244500  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:00.356164  375205 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:00.356290  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:00.856674  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:01.356420  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:01.857416  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:02.356778  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:02.857385  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:03.356493  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:03.379896  375205 api_server.go:72] duration metric: took 3.023730091s to wait for apiserver process to appear ...
	I0108 22:16:03.379953  375205 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:03.380023  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:02.344786  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:02.345408  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:02.345444  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:02.345339  376345 retry.go:31] will retry after 2.336202352s: waiting for machine to come up
	I0108 22:16:04.685192  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:04.685894  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:04.685947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:04.685809  376345 retry.go:31] will retry after 3.559467663s: waiting for machine to come up
	I0108 22:16:03.139113  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:03.139272  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:03.158043  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:03.638583  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:03.638737  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:03.659573  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:04.139075  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:04.139225  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:04.158993  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:04.638600  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:04.638766  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:04.657099  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:05.138627  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:05.138728  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:05.156654  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:05.639289  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:05.639436  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:05.658060  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:06.139303  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:06.139466  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:06.153866  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:06.638492  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:06.638651  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:06.656088  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.138685  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:07.138840  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:07.158365  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.638744  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:07.638838  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:07.656010  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.463229  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:07.463273  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:07.463299  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:07.534774  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:07.534812  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:07.880243  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:07.886835  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:07.886881  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:08.380688  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:08.385776  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:08.385821  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:08.880979  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:08.890142  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:08.890180  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:09.380526  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:09.385856  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 200:
	ok
	I0108 22:16:09.394800  375205 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 22:16:09.394838  375205 api_server.go:131] duration metric: took 6.014875532s to wait for apiserver health ...
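For context on the long run of 500 responses above: minikube polls the apiserver's /healthz endpoint until the bootstrap post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) report ok and the endpoint returns 200. The sketch below is a minimal standalone illustration of that polling loop, not minikube's actual api_server.go implementation; the URL, timeout, retry interval, and the insecure TLS client are illustrative assumptions.

// healthzpoll.go: minimal sketch of the healthz polling recorded in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// During bootstrap the apiserver serves a self-signed certificate, so this
		// illustration skips verification; the real check authenticates properly.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			// A 500 with "[-]poststarthook/... failed" means bootstrap hooks have
			// not finished yet; log it and retry after a short pause.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.153:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}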
	I0108 22:16:09.394851  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:16:09.394861  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:09.396785  375205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:09.398197  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:09.422683  375205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
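The two ssh_runner lines above create /etc/cni/net.d and copy a 457-byte bridge conflist onto the node. The sketch below shows what such a bridge CNI configuration typically looks like; the plugin list, subnet, and file name follow the stock containernetworking bridge layout and may not match minikube's exact template byte-for-byte.

// writecni.go: hedged sketch of the bridge CNI config referred to by the
// "scp memory --> /etc/cni/net.d/1-k8s.conflist" line above.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// On the node this content lands in /etc/cni/net.d/1-k8s.conflist over SSH;
	// writing it locally is enough to illustrate the file's shape.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}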
	I0108 22:16:09.464557  375205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:09.483416  375205 system_pods.go:59] 8 kube-system pods found
	I0108 22:16:09.483460  375205 system_pods.go:61] "coredns-76f75df574-v8fsw" [7d69f8ec-6684-49d0-8567-4032298a4e5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:09.483471  375205 system_pods.go:61] "etcd-no-preload-675668" [bc088c6e-5037-4e51-a021-2c5ac3c1c60c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:09.483488  375205 system_pods.go:61] "kube-apiserver-no-preload-675668" [0bbdf118-c47c-4298-ae5e-a984729ec21e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:09.483497  375205 system_pods.go:61] "kube-controller-manager-no-preload-675668" [2c3ff259-60a7-4205-b55f-85fe2d8e340d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:09.483513  375205 system_pods.go:61] "kube-proxy-dnbvk" [1803ec6b-5bd3-4ebb-bfd5-3a1356a1f168] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:09.483522  375205 system_pods.go:61] "kube-scheduler-no-preload-675668" [47737c5e-b59a-4df0-ac7c-36525e17733c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:09.483532  375205 system_pods.go:61] "metrics-server-57f55c9bc5-pk8bm" [71c7c744-c5fa-41e7-a26f-c04c30379b97] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:09.483537  375205 system_pods.go:61] "storage-provisioner" [1266430c-beda-4fa1-a057-cb07b8bf1f89] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:09.483547  375205 system_pods.go:74] duration metric: took 18.952011ms to wait for pod list to return data ...
	I0108 22:16:09.483562  375205 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:09.502939  375205 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:09.502989  375205 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:09.503007  375205 node_conditions.go:105] duration metric: took 19.439582ms to run NodePressure ...
	I0108 22:16:09.503031  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:08.246675  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:08.247243  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:08.247302  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:08.247185  376345 retry.go:31] will retry after 3.860632675s: waiting for machine to come up
	I0108 22:16:08.139286  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:08.139413  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:08.155694  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:08.639385  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:08.639521  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:08.655368  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:09.139022  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:09.139171  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:09.153512  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:09.638642  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:09.638765  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:09.653202  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.138833  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:10.138924  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:10.153529  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.639273  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:10.639462  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:10.655947  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.655981  375293 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:10.655991  375293 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:10.656003  375293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:10.656082  375293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:10.706638  375293 cri.go:89] found id: ""
	I0108 22:16:10.706721  375293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:10.726540  375293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:10.739540  375293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:10.739619  375293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:10.751112  375293 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:10.751158  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:10.877306  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.453755  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.664034  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.778440  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.866216  375293 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:11.866364  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:12.366749  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.862826  374880 start.go:369] acquired machines lock for "old-k8s-version-079759" in 1m1.534060396s
	I0108 22:16:13.862908  374880 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:16:13.862922  374880 fix.go:54] fixHost starting: 
	I0108 22:16:13.863465  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:16:13.863514  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:16:13.890658  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0108 22:16:13.891256  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:16:13.891974  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:16:13.891997  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:16:13.892356  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:16:13.892526  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:13.892634  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:16:13.894503  374880 fix.go:102] recreateIfNeeded on old-k8s-version-079759: state=Stopped err=<nil>
	I0108 22:16:13.894532  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	W0108 22:16:13.894707  374880 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:16:13.896778  374880 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-079759" ...
	I0108 22:16:13.898346  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Start
	I0108 22:16:13.898517  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring networks are active...
	I0108 22:16:13.899441  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring network default is active
	I0108 22:16:13.899906  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring network mk-old-k8s-version-079759 is active
	I0108 22:16:13.900424  374880 main.go:141] libmachine: (old-k8s-version-079759) Getting domain xml...
	I0108 22:16:13.901232  374880 main.go:141] libmachine: (old-k8s-version-079759) Creating domain...
	I0108 22:16:10.069721  375205 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:10.077465  375205 kubeadm.go:787] kubelet initialised
	I0108 22:16:10.077494  375205 kubeadm.go:788] duration metric: took 7.739231ms waiting for restarted kubelet to initialise ...
	I0108 22:16:10.077503  375205 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:10.085099  375205 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-v8fsw" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:12.095498  375205 pod_ready.go:102] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:14.100054  375205 pod_ready.go:102] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:12.111578  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.112089  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Found IP for machine: 192.168.50.18
	I0108 22:16:12.112118  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Reserving static IP address...
	I0108 22:16:12.112138  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has current primary IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.112627  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-292054", mac: "52:54:00:8d:25:78", ip: "192.168.50.18"} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.112660  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Reserved static IP address: 192.168.50.18
	I0108 22:16:12.112684  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | skip adding static IP to network mk-default-k8s-diff-port-292054 - found existing host DHCP lease matching {name: "default-k8s-diff-port-292054", mac: "52:54:00:8d:25:78", ip: "192.168.50.18"}
	I0108 22:16:12.112706  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Getting to WaitForSSH function...
	I0108 22:16:12.112729  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for SSH to be available...
	I0108 22:16:12.115245  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.115723  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.115762  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.115881  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Using SSH client type: external
	I0108 22:16:12.115917  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa (-rw-------)
	I0108 22:16:12.115947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:16:12.115967  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | About to run SSH command:
	I0108 22:16:12.116013  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | exit 0
	I0108 22:16:12.221209  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | SSH cmd err, output: <nil>: 
	I0108 22:16:12.221755  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetConfigRaw
	I0108 22:16:12.222634  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:12.225565  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.226008  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.226036  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.226326  375556 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/config.json ...
	I0108 22:16:12.226626  375556 machine.go:88] provisioning docker machine ...
	I0108 22:16:12.226658  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:12.226946  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.227160  375556 buildroot.go:166] provisioning hostname "default-k8s-diff-port-292054"
	I0108 22:16:12.227187  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.227381  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.230424  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.230867  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.230918  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.231036  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.231302  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.231511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.231674  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.231856  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:12.232448  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:12.232476  375556 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-292054 && echo "default-k8s-diff-port-292054" | sudo tee /etc/hostname
	I0108 22:16:12.382972  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-292054
	
	I0108 22:16:12.383015  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.386658  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.387055  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.387110  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.387410  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.387780  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.388020  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.388284  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.388576  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:12.388935  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:12.388954  375556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-292054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-292054/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-292054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:16:12.536473  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:16:12.536514  375556 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:16:12.536597  375556 buildroot.go:174] setting up certificates
	I0108 22:16:12.536619  375556 provision.go:83] configureAuth start
	I0108 22:16:12.536638  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.536995  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:12.540248  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.540775  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.540813  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.540982  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.544343  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.544924  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.544986  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.545143  375556 provision.go:138] copyHostCerts
	I0108 22:16:12.545241  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:16:12.545256  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:16:12.545329  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:16:12.545468  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:16:12.545485  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:16:12.545525  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:16:12.545603  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:16:12.545612  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:16:12.545630  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:16:12.545717  375556 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-292054 san=[192.168.50.18 192.168.50.18 localhost 127.0.0.1 minikube default-k8s-diff-port-292054]
	I0108 22:16:12.853268  375556 provision.go:172] copyRemoteCerts
	I0108 22:16:12.853332  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:16:12.853359  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.856503  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.856926  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.856959  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.857295  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.857536  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.857699  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.857904  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:12.961751  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:16:12.999065  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 22:16:13.037282  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:16:13.075006  375556 provision.go:86] duration metric: configureAuth took 538.367435ms
	I0108 22:16:13.075048  375556 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:16:13.075403  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:16:13.075509  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.078643  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.079141  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.079187  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.079518  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.079765  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.079976  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.080145  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.080388  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:13.080860  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:13.080891  375556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:16:13.523316  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:16:13.523355  375556 machine.go:91] provisioned docker machine in 1.296708962s
	I0108 22:16:13.523391  375556 start.go:300] post-start starting for "default-k8s-diff-port-292054" (driver="kvm2")
	I0108 22:16:13.523427  375556 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:16:13.523458  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.523937  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:16:13.523982  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.528392  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.528941  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.529005  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.529344  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.529715  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.529947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.530160  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:13.644605  375556 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:16:13.651917  375556 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:16:13.651970  375556 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:16:13.652120  375556 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:16:13.652268  375556 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:16:13.652452  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:16:13.667715  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:13.707995  375556 start.go:303] post-start completed in 184.580746ms
	I0108 22:16:13.708032  375556 fix.go:56] fixHost completed within 21.398677633s
	I0108 22:16:13.708061  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.712186  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.712754  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.712785  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.713001  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.713308  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.713572  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.713784  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.714062  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:13.714576  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:13.714597  375556 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:16:13.862558  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752173.800899341
	
	I0108 22:16:13.862600  375556 fix.go:206] guest clock: 1704752173.800899341
	I0108 22:16:13.862613  375556 fix.go:219] Guest: 2024-01-08 22:16:13.800899341 +0000 UTC Remote: 2024-01-08 22:16:13.708038237 +0000 UTC m=+267.678081968 (delta=92.861104ms)
	I0108 22:16:13.862688  375556 fix.go:190] guest clock delta is within tolerance: 92.861104ms
	I0108 22:16:13.862700  375556 start.go:83] releasing machines lock for "default-k8s-diff-port-292054", held for 21.553389859s
	I0108 22:16:13.862760  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.863344  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:13.867702  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.868132  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.868160  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.868553  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869294  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869606  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869710  375556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:16:13.869908  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.870024  375556 ssh_runner.go:195] Run: cat /version.json
	I0108 22:16:13.870055  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.874047  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.874604  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.874637  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876082  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876102  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.876135  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.876339  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876083  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.876354  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.876518  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.876771  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.876808  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.876928  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:13.877140  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:14.020544  375556 ssh_runner.go:195] Run: systemctl --version
	I0108 22:16:14.030180  375556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:16:14.192218  375556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:16:14.200925  375556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:16:14.201038  375556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:16:14.223169  375556 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:16:14.223200  375556 start.go:475] detecting cgroup driver to use...
	I0108 22:16:14.223274  375556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:16:14.246782  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:16:14.264283  375556 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:16:14.264417  375556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:16:14.281460  375556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:16:14.295968  375556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:16:14.443907  375556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:16:14.611299  375556 docker.go:219] disabling docker service ...
	I0108 22:16:14.611425  375556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:16:14.630493  375556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:16:14.649912  375556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:16:14.787666  375556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:16:14.971826  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:16:15.004969  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:16:15.032889  375556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:16:15.032982  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.050131  375556 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:16:15.050223  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.066011  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.082365  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.098387  375556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:16:15.115648  375556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:16:15.129675  375556 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:16:15.129848  375556 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:16:15.151333  375556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:16:15.165637  375556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:16:15.308416  375556 ssh_runner.go:195] Run: sudo systemctl restart crio
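The sed edits a few lines above set the pause image, switch the cgroup manager to cgroupfs, and pin conmon to the "pod" cgroup before CRI-O is restarted. The sketch below shows the end state those edits produce in the drop-in file; the real 02-crio.conf shipped in the VM contains more settings than these three, and the sketch writes locally so it can run without root.

// crioconf.go: sketch of the CRI-O drop-in produced by the edits above.
package main

import "os"

const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
`

func main() {
	// On the node this lands in /etc/crio/crio.conf.d/02-crio.conf and is
	// followed by "systemctl restart crio"; written to the working directory
	// here purely for illustration.
	if err := os.WriteFile("02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		panic(err)
	}
}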
	I0108 22:16:15.580204  375556 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:16:15.580284  375556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:16:15.587895  375556 start.go:543] Will wait 60s for crictl version
	I0108 22:16:15.588108  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:16:15.594471  375556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:16:15.645175  375556 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:16:15.645273  375556 ssh_runner.go:195] Run: crio --version
	I0108 22:16:15.707630  375556 ssh_runner.go:195] Run: crio --version
	I0108 22:16:15.779275  375556 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 22:16:15.781032  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:15.784486  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:15.784896  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:15.784965  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:15.785126  375556 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0108 22:16:15.790707  375556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:15.810441  375556 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:16:15.810515  375556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:15.867423  375556 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:16:15.867591  375556 ssh_runner.go:195] Run: which lz4
	I0108 22:16:15.873029  375556 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:16:15.879394  375556 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:16:15.879500  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
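The decision recorded just above (crio.go:492 "couldn't find preloaded image ... assuming images are not preloaded", followed by copying the 458 MB preload tarball) boils down to: list the runtime's images, look for the expected kube-apiserver tag, and fall back to the tarball if it is missing. A hedged sketch of that check is below; the JSON field names follow the CRI ListImagesResponse encoding used by crictl, the image tag and tarball path are taken from the log, and the command is run locally rather than over SSH.

// preloadcheck.go: sketch of the "are images preloaded?" decision above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(target string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, target) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if !ok {
		fmt.Println("images not preloaded; would copy preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4")
	}
}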
	I0108 22:16:12.867258  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.367211  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.866433  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.366622  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.866611  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.907073  375293 api_server.go:72] duration metric: took 3.040854669s to wait for apiserver process to appear ...
	I0108 22:16:14.907116  375293 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:14.907141  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:15.738179  374880 main.go:141] libmachine: (old-k8s-version-079759) Waiting to get IP...
	I0108 22:16:15.739231  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:15.739808  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:15.739893  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:15.739787  376492 retry.go:31] will retry after 271.587986ms: waiting for machine to come up
	I0108 22:16:16.013648  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.014344  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.014388  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.014267  376492 retry.go:31] will retry after 376.425749ms: waiting for machine to come up
	I0108 22:16:16.392497  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.392985  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.393013  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.392894  376492 retry.go:31] will retry after 340.776058ms: waiting for machine to come up
	I0108 22:16:16.735696  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.736412  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.736452  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.736349  376492 retry.go:31] will retry after 559.6759ms: waiting for machine to come up
	I0108 22:16:17.297397  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:17.297990  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:17.298027  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:17.297965  376492 retry.go:31] will retry after 738.214425ms: waiting for machine to come up
	I0108 22:16:18.038578  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:18.039239  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:18.039269  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:18.039120  376492 retry.go:31] will retry after 762.268706ms: waiting for machine to come up
	I0108 22:16:18.803986  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:18.804560  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:18.804589  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:18.804438  376492 retry.go:31] will retry after 1.027542644s: waiting for machine to come up
	I0108 22:16:15.104174  375205 pod_ready.go:92] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:15.104208  375205 pod_ready.go:81] duration metric: took 5.01907031s waiting for pod "coredns-76f75df574-v8fsw" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:15.104223  375205 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:17.117526  375205 pod_ready.go:102] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:19.615842  375205 pod_ready.go:102] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:17.971748  375556 crio.go:444] Took 2.098761 seconds to copy over tarball
	I0108 22:16:17.971905  375556 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:16:19.481826  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:19.481865  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:19.481883  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:19.529381  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:19.529427  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:19.907613  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:19.914772  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:19.914824  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:20.407461  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:20.418184  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:20.418238  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:20.908072  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:20.920042  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:20.920085  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:21.407506  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:21.414375  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I0108 22:16:21.428398  375293 api_server.go:141] control plane version: v1.28.4
	I0108 22:16:21.428439  375293 api_server.go:131] duration metric: took 6.521312808s to wait for apiserver health ...
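The sequence above shows the usual bring-up pattern: poll the apiserver's /healthz endpoint, tolerating the 403 and 500 responses emitted while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) are still completing, until it returns 200. A minimal stand-alone sketch of that polling loop (illustrative only, not minikube's implementation; the URL and timeouts are placeholders):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
	// TLS verification is skipped because the apiserver certificate is not in
	// the local trust store, as is typical for a bootstrap health probe.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(500 * time.Millisecond) // retry roughly twice per second
		}
		return fmt.Errorf("healthz did not return 200 within %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.72.132:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}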
	I0108 22:16:21.428451  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:16:21.428460  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:21.920874  375293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:22.268512  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:22.284953  375293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:22.309346  375293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:22.465452  375293 system_pods.go:59] 9 kube-system pods found
	I0108 22:16:22.465501  375293 system_pods.go:61] "coredns-5dd5756b68-wxfs6" [965cab31-c39a-4885-bc6f-6575fe026794] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:22.465516  375293 system_pods.go:61] "coredns-5dd5756b68-zbjfn" [1b521296-8e4c-4252-a729-5727cd71d3f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:22.465534  375293 system_pods.go:61] "etcd-embed-certs-903819" [be30d1b3-e4a8-4daf-9c0e-f3b776499471] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:22.465546  375293 system_pods.go:61] "kube-apiserver-embed-certs-903819" [530546d9-1cec-45f5-9e3e-f5d08e913cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:22.465563  375293 system_pods.go:61] "kube-controller-manager-embed-certs-903819" [bb0d60c9-cdaf-491d-aa20-5a522f351e17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:22.465573  375293 system_pods.go:61] "kube-proxy-gjlx8" [9247e922-69de-4e59-a6d2-06c791d43031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:22.465586  375293 system_pods.go:61] "kube-scheduler-embed-certs-903819" [1aa50057-5aa4-44b2-a762-6f0eee5b3856] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:22.465602  375293 system_pods.go:61] "metrics-server-57f55c9bc5-jswgz" [8f18e01f-981d-48fe-9ce6-5155794da657] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:22.465614  375293 system_pods.go:61] "storage-provisioner" [ea2ac609-5857-4597-9432-e2f4f4630ee2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:22.465629  375293 system_pods.go:74] duration metric: took 156.242171ms to wait for pod list to return data ...
	I0108 22:16:22.465643  375293 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:22.523465  375293 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:22.523529  375293 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:22.523552  375293 node_conditions.go:105] duration metric: took 57.897769ms to run NodePressure ...
	I0108 22:16:22.523585  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:19.833814  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:19.834296  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:19.834341  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:19.834229  376492 retry.go:31] will retry after 1.469300536s: waiting for machine to come up
	I0108 22:16:21.305138  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:21.305962  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:21.306001  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:21.305834  376492 retry.go:31] will retry after 1.215696449s: waiting for machine to come up
	I0108 22:16:22.523937  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:22.524780  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:22.524813  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:22.524676  376492 retry.go:31] will retry after 1.652609537s: waiting for machine to come up
	I0108 22:16:24.179958  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:24.180881  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:24.180910  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:24.180780  376492 retry.go:31] will retry after 2.03835476s: waiting for machine to come up
	I0108 22:16:21.115112  375205 pod_ready.go:92] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.115153  375205 pod_ready.go:81] duration metric: took 6.010921481s waiting for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.115169  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.130056  375205 pod_ready.go:92] pod "kube-apiserver-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.130113  375205 pod_ready.go:81] duration metric: took 14.932775ms waiting for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.130137  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.149011  375205 pod_ready.go:92] pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.149054  375205 pod_ready.go:81] duration metric: took 18.905543ms waiting for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.149071  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dnbvk" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.162994  375205 pod_ready.go:92] pod "kube-proxy-dnbvk" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.163037  375205 pod_ready.go:81] duration metric: took 13.956516ms waiting for pod "kube-proxy-dnbvk" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.163053  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.172926  375205 pod_ready.go:92] pod "kube-scheduler-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.172975  375205 pod_ready.go:81] duration metric: took 9.906476ms waiting for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.172991  375205 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:23.182086  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:22.162439  375556 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.190451334s)
	I0108 22:16:22.162503  375556 crio.go:451] Took 4.190696 seconds to extract the tarball
	I0108 22:16:22.162522  375556 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:16:22.212617  375556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:22.290948  375556 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:16:22.290982  375556 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:16:22.291067  375556 ssh_runner.go:195] Run: crio config
	I0108 22:16:22.361099  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:16:22.361135  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:22.361166  375556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:16:22.361192  375556 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.18 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-292054 NodeName:default-k8s-diff-port-292054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:16:22.361488  375556 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.18
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-292054"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:16:22.361599  375556 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-292054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 22:16:22.361681  375556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:16:22.376350  375556 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:16:22.376489  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:16:22.389808  375556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0108 22:16:22.414305  375556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:16:22.433716  375556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0108 22:16:22.461925  375556 ssh_runner.go:195] Run: grep 192.168.50.18	control-plane.minikube.internal$ /etc/hosts
	I0108 22:16:22.467236  375556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:22.484487  375556 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054 for IP: 192.168.50.18
	I0108 22:16:22.484537  375556 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:16:22.484688  375556 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:16:22.484724  375556 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:16:22.484794  375556 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/client.key
	I0108 22:16:22.484845  375556 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.key.4ed28ecc
	I0108 22:16:22.484886  375556 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.key
	I0108 22:16:22.485012  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:16:22.485042  375556 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:16:22.485056  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:16:22.485077  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:16:22.485107  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:16:22.485133  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:16:22.485182  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:22.485917  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:16:22.516640  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:16:22.554723  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:22.589730  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:22.624933  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:22.656950  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:22.691213  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:22.725882  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:22.757465  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:22.789479  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:22.818877  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:22.848834  375556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:22.869951  375556 ssh_runner.go:195] Run: openssl version
	I0108 22:16:22.877921  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:22.892998  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.899697  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.899798  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.906225  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:22.918957  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:22.930809  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.937461  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.937595  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.945257  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:22.956453  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:22.969894  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.976162  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.976249  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.983601  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:22.995487  375556 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:23.002869  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:23.011231  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:23.019450  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:23.028645  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:23.036530  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:23.044216  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
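The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate stays valid for at least the next 24 hours. The same check can be done against a PEM file directly with the standard library; a small sketch (illustrative only, the certificate path is a placeholder):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d, i.e. what `openssl x509 -checkend <seconds>` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}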
	I0108 22:16:23.050779  375556 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:23.050875  375556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:23.050968  375556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:23.098736  375556 cri.go:89] found id: ""
	I0108 22:16:23.098806  375556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:23.110702  375556 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:23.110738  375556 kubeadm.go:636] restartCluster start
	I0108 22:16:23.110807  375556 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:23.122131  375556 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.124018  375556 kubeconfig.go:92] found "default-k8s-diff-port-292054" server: "https://192.168.50.18:8444"
	I0108 22:16:23.127827  375556 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:23.141921  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:23.142029  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:23.155738  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.642320  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:23.642416  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:23.655783  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:24.142361  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:24.142522  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:24.161739  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:24.642247  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:24.642392  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:24.659564  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:25.142097  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:25.142341  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:25.156773  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:25.642249  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:25.642362  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:25.655785  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.802042  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.278422708s)
	I0108 22:16:23.802099  375293 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:23.816719  375293 kubeadm.go:787] kubelet initialised
	I0108 22:16:23.816770  375293 kubeadm.go:788] duration metric: took 14.659036ms waiting for restarted kubelet to initialise ...
	I0108 22:16:23.816787  375293 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:23.831999  375293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:25.843652  375293 pod_ready.go:102] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:26.220729  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:26.221388  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:26.221424  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:26.221322  376492 retry.go:31] will retry after 2.215929666s: waiting for machine to come up
	I0108 22:16:28.440185  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:28.440859  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:28.440894  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:28.440781  376492 retry.go:31] will retry after 4.455149908s: waiting for machine to come up
	I0108 22:16:25.184929  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:27.682851  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:29.685033  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:26.142553  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:26.142728  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:26.160691  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:26.642356  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:26.642469  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:26.656481  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.142104  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:27.142265  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:27.157378  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.642473  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:27.642577  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:27.656662  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:28.142925  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:28.143080  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:28.160815  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:28.642072  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:28.642188  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:28.662580  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:29.142008  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:29.142158  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:29.161132  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:29.642780  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:29.642919  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:29.661247  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:30.142588  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:30.142747  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:30.159262  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:30.642472  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:30.642650  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:30.659741  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.847129  375293 pod_ready.go:102] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:30.347456  375293 pod_ready.go:92] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:30.347490  375293 pod_ready.go:81] duration metric: took 6.51546229s waiting for pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.347501  375293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.354929  375293 pod_ready.go:92] pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:30.354955  375293 pod_ready.go:81] duration metric: took 7.447354ms waiting for pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.354965  375293 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.867755  375293 pod_ready.go:92] pod "etcd-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.867788  375293 pod_ready.go:81] duration metric: took 1.512815387s waiting for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.867801  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.875662  375293 pod_ready.go:92] pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.875711  375293 pod_ready.go:81] duration metric: took 7.899159ms waiting for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.875730  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.885348  375293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.885395  375293 pod_ready.go:81] duration metric: took 9.655438ms waiting for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.885410  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gjlx8" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.943389  375293 pod_ready.go:92] pod "kube-proxy-gjlx8" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.943424  375293 pod_ready.go:81] duration metric: took 58.006295ms waiting for pod "kube-proxy-gjlx8" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.943435  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.337716  375293 pod_ready.go:92] pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:32.337752  375293 pod_ready.go:81] duration metric: took 394.305103ms waiting for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.337763  375293 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.901098  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:32.901564  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:32.901601  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:32.901488  376492 retry.go:31] will retry after 3.655042594s: waiting for machine to come up
	I0108 22:16:32.182102  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:34.685634  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:31.142410  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:31.142532  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:31.156191  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:31.642990  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:31.643137  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:31.656623  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:32.142116  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:32.142225  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:32.155597  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:32.642804  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:32.642897  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:32.656038  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:33.142630  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:33.142742  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:33.155977  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:33.156022  375556 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:33.156049  375556 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:33.156064  375556 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:33.156127  375556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:33.205442  375556 cri.go:89] found id: ""
	I0108 22:16:33.205556  375556 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:33.225775  375556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:33.236014  375556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:33.236122  375556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:33.246331  375556 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:33.246385  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:33.389338  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.044093  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.279910  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.436859  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
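The five commands above re-run individual "kubeadm init" phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the regenerated /var/tmp/minikube/kubeadm.yaml rather than re-initializing the cluster from scratch. A minimal sketch of that pattern, assuming a plain os/exec runner and a kubeadm binary on PATH instead of minikube's ssh_runner and versioned binary path:

    // Sketch only: replay selected "kubeadm init" phases in the order the log uses.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func runPhase(phase string) error {
        args := append([]string{"kubeadm", "init", "phase"}, strings.Fields(phase)...)
        args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
        }
        return nil
    }

    func main() {
        for _, p := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
            if err := runPhase(p); err != nil {
                fmt.Println(err)
                return
            }
        }
    }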
	I0108 22:16:34.536169  375556 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:34.536274  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:35.036740  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:35.536732  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:36.036604  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:34.346227  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.347971  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.558150  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.558817  374880 main.go:141] libmachine: (old-k8s-version-079759) Found IP for machine: 192.168.39.183
	I0108 22:16:36.558839  374880 main.go:141] libmachine: (old-k8s-version-079759) Reserving static IP address...
	I0108 22:16:36.558855  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has current primary IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.559397  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "old-k8s-version-079759", mac: "52:54:00:79:02:7b", ip: "192.168.39.183"} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.559451  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | skip adding static IP to network mk-old-k8s-version-079759 - found existing host DHCP lease matching {name: "old-k8s-version-079759", mac: "52:54:00:79:02:7b", ip: "192.168.39.183"}
	I0108 22:16:36.559471  374880 main.go:141] libmachine: (old-k8s-version-079759) Reserved static IP address: 192.168.39.183
	I0108 22:16:36.559495  374880 main.go:141] libmachine: (old-k8s-version-079759) Waiting for SSH to be available...
	I0108 22:16:36.559511  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Getting to WaitForSSH function...
	I0108 22:16:36.562077  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.562439  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.562496  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.562806  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Using SSH client type: external
	I0108 22:16:36.562846  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa (-rw-------)
	I0108 22:16:36.562938  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:16:36.562985  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | About to run SSH command:
	I0108 22:16:36.563005  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | exit 0
	I0108 22:16:36.655957  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | SSH cmd err, output: <nil>: 
	I0108 22:16:36.656393  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetConfigRaw
	I0108 22:16:36.657349  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:36.660624  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.661056  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.661097  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.661415  374880 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/config.json ...
	I0108 22:16:36.661673  374880 machine.go:88] provisioning docker machine ...
	I0108 22:16:36.661699  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:36.662007  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.662224  374880 buildroot.go:166] provisioning hostname "old-k8s-version-079759"
	I0108 22:16:36.662249  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.662416  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.665572  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.666013  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.666056  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.666311  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:36.666582  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.666770  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.666945  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:36.667141  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:36.667677  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:36.667700  374880 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-079759 && echo "old-k8s-version-079759" | sudo tee /etc/hostname
	I0108 22:16:36.813113  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-079759
	
	I0108 22:16:36.813174  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.816444  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.816774  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.816814  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.816995  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:36.817323  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.817559  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.817739  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:36.817969  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:36.818431  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:36.818461  374880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-079759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-079759/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-079759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:16:36.952252  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:16:36.952306  374880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:16:36.952343  374880 buildroot.go:174] setting up certificates
	I0108 22:16:36.952359  374880 provision.go:83] configureAuth start
	I0108 22:16:36.952372  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.952803  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:36.955895  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.956276  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.956310  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.956579  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.959251  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.959667  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.959723  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.959825  374880 provision.go:138] copyHostCerts
	I0108 22:16:36.959896  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:16:36.959909  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:16:36.959987  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:16:36.960106  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:16:36.960122  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:16:36.960152  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:16:36.960240  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:16:36.960251  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:16:36.960286  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:16:36.960370  374880 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-079759 san=[192.168.39.183 192.168.39.183 localhost 127.0.0.1 minikube old-k8s-version-079759]
	I0108 22:16:37.054312  374880 provision.go:172] copyRemoteCerts
	I0108 22:16:37.054396  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:16:37.054428  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.058048  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.058545  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.058580  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.058823  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.059165  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.059439  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.059614  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.158033  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:16:37.190220  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:16:37.219035  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 22:16:37.246894  374880 provision.go:86] duration metric: configureAuth took 294.516334ms
	I0108 22:16:37.246938  374880 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:16:37.247165  374880 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:16:37.247269  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.250766  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.251305  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.251344  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.251654  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.251992  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.252253  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.252456  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.252701  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:37.253066  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:37.253091  374880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:16:37.626837  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:16:37.626886  374880 machine.go:91] provisioned docker machine in 965.198968ms
	I0108 22:16:37.626899  374880 start.go:300] post-start starting for "old-k8s-version-079759" (driver="kvm2")
	I0108 22:16:37.626924  374880 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:16:37.626991  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.627562  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:16:37.627626  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.631567  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.631840  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.631876  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.632070  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.632322  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.632578  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.632749  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.732984  374880 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:16:37.740111  374880 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:16:37.740158  374880 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:16:37.740268  374880 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:16:37.740384  374880 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:16:37.740527  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:16:37.751840  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:37.780796  374880 start.go:303] post-start completed in 153.87709ms
	I0108 22:16:37.780833  374880 fix.go:56] fixHost completed within 23.917911044s
	I0108 22:16:37.780861  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.784200  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.784663  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.784698  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.784916  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.785192  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.785482  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.785652  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.785819  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:37.786310  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:37.786334  374880 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:16:37.908632  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752197.846451761
	
	I0108 22:16:37.908664  374880 fix.go:206] guest clock: 1704752197.846451761
	I0108 22:16:37.908677  374880 fix.go:219] Guest: 2024-01-08 22:16:37.846451761 +0000 UTC Remote: 2024-01-08 22:16:37.780837729 +0000 UTC m=+368.040141999 (delta=65.614032ms)
	I0108 22:16:37.908740  374880 fix.go:190] guest clock delta is within tolerance: 65.614032ms
	I0108 22:16:37.908756  374880 start.go:83] releasing machines lock for "old-k8s-version-079759", held for 24.045885784s
	I0108 22:16:37.908801  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.909113  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:37.912363  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.912708  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.912745  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.913058  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913581  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913769  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913860  374880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:16:37.913906  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.914052  374880 ssh_runner.go:195] Run: cat /version.json
	I0108 22:16:37.914081  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.916674  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917009  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917330  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.917371  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917433  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.917523  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.917545  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917622  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.917791  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.917862  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.917973  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.918026  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.918185  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.918303  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:38.009398  374880 ssh_runner.go:195] Run: systemctl --version
	I0108 22:16:38.040945  374880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:16:38.191198  374880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:16:38.198405  374880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:16:38.198504  374880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:16:38.218602  374880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:16:38.218641  374880 start.go:475] detecting cgroup driver to use...
	I0108 22:16:38.218722  374880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:16:38.234161  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:16:38.250033  374880 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:16:38.250107  374880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:16:38.266262  374880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:16:38.281553  374880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:16:38.402503  374880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:16:38.558016  374880 docker.go:219] disabling docker service ...
	I0108 22:16:38.558124  374880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:16:38.573689  374880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:16:38.589002  374880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:16:38.718943  374880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:16:38.853252  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:16:38.869464  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:16:38.890384  374880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 22:16:38.890538  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.904645  374880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:16:38.904745  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.916308  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.927747  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.938877  374880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:16:38.951536  374880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:16:38.961810  374880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:16:38.961889  374880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:16:38.976131  374880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:16:38.990253  374880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:16:39.129313  374880 ssh_runner.go:195] Run: sudo systemctl restart crio
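The sed invocations above rewrite single "key = value" lines in /etc/crio/crio.conf.d/02-crio.conf (pause_image to registry.k8s.io/pause:3.1, cgroup_manager to cgroupfs, plus the conmon_cgroup companion line), and CRI-O is then restarted so the new settings take effect. The same whole-line key rewrite, sketched in Go under the assumption that the drop-in only uses simple one-line assignments:

    // Sketch only: replace every `key = ...` line with the desired value,
    // the way the sed commands in the log do.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func setKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
    }

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.1")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            fmt.Println(err)
        }
        // CRI-O still needs a "sudo systemctl restart crio" afterwards, as in the log.
    }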
	I0108 22:16:39.322691  374880 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:16:39.322796  374880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:16:39.329204  374880 start.go:543] Will wait 60s for crictl version
	I0108 22:16:39.329317  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:39.333991  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:16:39.381363  374880 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:16:39.381484  374880 ssh_runner.go:195] Run: crio --version
	I0108 22:16:39.435964  374880 ssh_runner.go:195] Run: crio --version
	I0108 22:16:39.499543  374880 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0108 22:16:39.501084  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:39.504205  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:39.504541  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:39.504579  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:39.504935  374880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 22:16:39.510323  374880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
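The bash one-liner above keeps the host.minikube.internal entry idempotent: it filters any existing line for that name out of /etc/hosts, appends a fresh "192.168.39.1\thost.minikube.internal" entry, and copies the temp file back into place. A small sketch of the same filter-then-append update, assuming direct file access rather than minikube's ssh_runner:

    // Sketch only: drop any stale entry for the hostname, then append the new one.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // remove the old entry so it is never duplicated
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }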
	I0108 22:16:39.526998  374880 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:16:39.527057  374880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:39.577709  374880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0108 22:16:39.577793  374880 ssh_runner.go:195] Run: which lz4
	I0108 22:16:39.582925  374880 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 22:16:39.589373  374880 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:16:39.589421  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0108 22:16:37.184707  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:39.683810  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.537007  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:37.037157  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:37.061202  375556 api_server.go:72] duration metric: took 2.525037167s to wait for apiserver process to appear ...
	I0108 22:16:37.061229  375556 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:37.061250  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:37.061790  375556 api_server.go:269] stopped: https://192.168.50.18:8444/healthz: Get "https://192.168.50.18:8444/healthz": dial tcp 192.168.50.18:8444: connect: connection refused
	I0108 22:16:37.561995  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:38.852752  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:41.361118  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:42.562614  375556 api_server.go:269] stopped: https://192.168.50.18:8444/healthz: Get "https://192.168.50.18:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 22:16:42.562680  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:42.626918  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:42.626956  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:43.061435  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:43.078776  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:43.078841  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:43.561364  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:43.575304  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:43.575397  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:44.061694  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:44.072328  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:44.072394  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:44.561536  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:44.572055  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 200:
	ok
	I0108 22:16:44.586947  375556 api_server.go:141] control plane version: v1.28.4
	I0108 22:16:44.587011  375556 api_server.go:131] duration metric: took 7.52577273s to wait for apiserver health ...
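The retries above poll GET /healthz roughly every half second, treating connection refusals, the early 403 from the anonymous user, and the 500s with "[-]poststarthook/rbac/bootstrap-roles failed" as "not ready yet" until a plain 200/ok arrives. A minimal sketch of such a poll loop, assuming a fixed interval, an overall deadline, and skipped TLS verification purely for illustration:

    // Sketch only: wait until /healthz answers 200; anything else means retry.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    return nil // "healthz returned 200: ok"
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s never became healthy within %s", url, timeout)
    }

    func main() {
        if err := waitHealthy("https://192.168.50.18:8444/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }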
	I0108 22:16:44.587029  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:16:44.587040  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:44.765569  375556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:41.520470  374880 crio.go:444] Took 1.937584 seconds to copy over tarball
	I0108 22:16:41.520541  374880 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:16:41.683864  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:44.183495  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:44.867194  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:44.881203  375556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:44.906051  375556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:44.958770  375556 system_pods.go:59] 8 kube-system pods found
	I0108 22:16:44.958813  375556 system_pods.go:61] "coredns-5dd5756b68-vcmh6" [4d87af85-075d-427c-b4ca-ba57421fc8de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:44.958823  375556 system_pods.go:61] "etcd-default-k8s-diff-port-292054" [5353bc6f-061b-414b-823b-fa224887733c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:44.958831  375556 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-292054" [aa609bfc-ba8f-4d82-bdcd-2f17e0b1b2a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:44.958838  375556 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-292054" [2500070d-a348-47a9-a1d6-525eb3ee12d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:44.958847  375556 system_pods.go:61] "kube-proxy-f4xsp" [d0987c89-c598-4ae9-a60a-bad8df066d0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:44.958867  375556 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-292054" [9b4e73b7-a4ff-469f-b03e-1170d068af2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:44.958883  375556 system_pods.go:61] "metrics-server-57f55c9bc5-6w57p" [7a85be99-ad7e-4866-a8d8-0972435dfd02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:44.958899  375556 system_pods.go:61] "storage-provisioner" [4be6edbe-cb8e-4598-9d23-1cefc0afc184] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:44.958908  375556 system_pods.go:74] duration metric: took 52.82566ms to wait for pod list to return data ...
	I0108 22:16:44.958923  375556 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:44.965171  375556 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:44.965220  375556 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:44.965235  375556 node_conditions.go:105] duration metric: took 6.306299ms to run NodePressure ...
	I0108 22:16:44.965271  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:43.845812  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:45.851004  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:45.115268  374880 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.594690355s)
	I0108 22:16:45.115304  374880 crio.go:451] Took 3.594805 seconds to extract the tarball
	I0108 22:16:45.115316  374880 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:16:45.165012  374880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:45.542219  374880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0108 22:16:45.542266  374880 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:16:45.542362  374880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:45.542384  374880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.542409  374880 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 22:16:45.542451  374880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.542489  374880 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.542392  374880 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.542666  374880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.542661  374880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.543883  374880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.543921  374880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.543888  374880 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 22:16:45.543944  374880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.543888  374880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:45.543970  374880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.543895  374880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.544327  374880 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.737830  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.747956  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0108 22:16:45.780688  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.799788  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.811226  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.819948  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.857132  374880 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0108 22:16:45.857195  374880 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.857257  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.867494  374880 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0108 22:16:45.867547  374880 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0108 22:16:45.867622  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.871438  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.900657  374880 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0108 22:16:45.900706  374880 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.900755  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.986789  374880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0108 22:16:45.986850  374880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.986909  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.001283  374880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0108 22:16:46.001335  374880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:46.001389  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.009750  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0108 22:16:46.009783  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0108 22:16:46.009830  374880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0108 22:16:46.009848  374880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0108 22:16:46.009879  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:46.009887  374880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:46.009887  374880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:46.009904  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:46.009929  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.009967  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:46.009933  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.173258  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0108 22:16:46.173293  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 22:16:46.173387  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:46.173402  374880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.173451  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0108 22:16:46.173458  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0108 22:16:46.173539  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:46.173588  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0108 22:16:46.238533  374880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0108 22:16:46.238562  374880 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.238589  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0108 22:16:46.238619  374880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.238692  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0108 22:16:46.499734  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:47.197262  374880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0108 22:16:47.197344  374880 cache_images.go:92] LoadImages completed in 1.65506117s
	W0108 22:16:47.197431  374880 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0108 22:16:47.197628  374880 ssh_runner.go:195] Run: crio config
	I0108 22:16:47.273121  374880 cni.go:84] Creating CNI manager for ""
	I0108 22:16:47.273164  374880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:47.273206  374880 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:16:47.273242  374880 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-079759 NodeName:old-k8s-version-079759 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 22:16:47.273439  374880 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-079759"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-079759
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.183:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:16:47.273557  374880 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-079759 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079759 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:16:47.273641  374880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 22:16:47.284374  374880 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:16:47.284528  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:16:47.295740  374880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0108 22:16:47.317874  374880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:16:47.339820  374880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0108 22:16:47.365063  374880 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0108 22:16:47.369942  374880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:47.387586  374880 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759 for IP: 192.168.39.183
	I0108 22:16:47.387637  374880 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:16:47.387862  374880 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:16:47.387929  374880 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:16:47.388036  374880 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.key
	I0108 22:16:47.388144  374880 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.key.a2b84326
	I0108 22:16:47.388185  374880 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.key
	I0108 22:16:47.388370  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:16:47.388426  374880 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:16:47.388449  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:16:47.388490  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:16:47.388524  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:16:47.388562  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:16:47.388629  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:47.389626  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:16:47.424129  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:16:47.455835  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:47.489732  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:47.523253  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:47.555019  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:47.587218  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:47.620629  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:47.654460  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:47.688945  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:47.722824  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:47.754016  374880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:47.773665  374880 ssh_runner.go:195] Run: openssl version
	I0108 22:16:47.779972  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:47.794327  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.801998  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.802101  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.808765  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:47.822088  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:47.836322  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.843412  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.843508  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.852467  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:47.871573  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:47.886132  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.892165  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.892250  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.898728  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
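Note on the three symlink steps above: OpenSSL looks up CA certificates in /etc/ssl/certs by subject-name hash, so each cert is linked under "<hash>.0" (e.g. b5213941.0 for minikubeCA.pem, 51391683.0 for 341982.pem). A minimal sketch of the same operation done by hand on the VM, using the paths from the log:

	# derive the OpenSSL subject hash and install the hash-named symlink it expects
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
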
	I0108 22:16:47.911118  374880 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:47.918486  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:47.928188  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:47.936324  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:47.942939  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:47.952136  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:47.962062  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
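The six openssl runs just above use "-checkend 86400" to ask whether each certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status marks the cert for regeneration. Checking a single cert by hand looks like this (sketch only; same path as in the log):

	# exit 0 = still valid 24h from now, exit 1 = will have expired by then
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expires within 24h"
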
	I0108 22:16:47.969861  374880 kubeadm.go:404] StartCluster: {Name:old-k8s-version-079759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079759 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:47.969986  374880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:47.970065  374880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:48.023933  374880 cri.go:89] found id: ""
	I0108 22:16:48.024025  374880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:48.040341  374880 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:48.040377  374880 kubeadm.go:636] restartCluster start
	I0108 22:16:48.040461  374880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:48.051709  374880 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:48.053467  374880 kubeconfig.go:92] found "old-k8s-version-079759" server: "https://192.168.39.183:8443"
	I0108 22:16:48.057824  374880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:48.071248  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:48.071367  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:48.086864  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:48.572297  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:48.572426  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:48.590996  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:49.072205  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:49.072316  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:49.085908  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:49.571496  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:49.571641  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:49.587609  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:46.683555  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:48.683848  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:47.463595  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.498282893s)
	I0108 22:16:47.463651  375556 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:47.494376  375556 kubeadm.go:787] kubelet initialised
	I0108 22:16:47.494409  375556 kubeadm.go:788] duration metric: took 30.746268ms waiting for restarted kubelet to initialise ...
	I0108 22:16:47.494419  375556 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:47.518711  375556 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:49.532387  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:47.854322  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:50.347325  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:52.349479  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:50.071318  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:50.071492  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:50.087514  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:50.572137  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:50.572248  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:50.586581  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.072060  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:51.072182  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:51.087008  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.571464  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:51.571586  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:51.585684  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:52.072246  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:52.072323  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:52.087689  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:52.572243  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:52.572347  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:52.587037  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:53.071470  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:53.071589  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:53.086911  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:53.571460  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:53.571553  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:53.586045  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:54.072236  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:54.072358  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:54.087701  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:54.572312  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:54.572446  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:54.587922  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.181229  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:53.182527  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:52.026615  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:54.027979  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:54.849162  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:57.346988  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:55.071292  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:55.071441  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:55.090623  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:55.572144  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:55.572231  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:55.587405  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:56.071926  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:56.072056  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:56.086264  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:56.571790  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:56.571930  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:56.586088  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:57.071438  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:57.071546  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:57.087310  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:57.571491  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:57.571640  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:57.585754  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:58.071604  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:58.071723  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:58.087027  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:58.087070  374880 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:58.087086  374880 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:58.087128  374880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:58.087206  374880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:58.137792  374880 cri.go:89] found id: ""
	I0108 22:16:58.137875  374880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:58.157140  374880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:58.171953  374880 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:58.172029  374880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:58.186287  374880 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:58.186325  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:58.316514  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.124691  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.386136  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.490503  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.609542  374880 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:59.609648  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:55.684783  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:58.189882  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:56.527144  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:58.529935  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:01.030202  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:59.350073  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:01.845861  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:00.109804  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:00.610728  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.110191  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.609754  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.638919  374880 api_server.go:72] duration metric: took 2.029378055s to wait for apiserver process to appear ...
	I0108 22:17:01.638952  374880 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:17:01.638975  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:00.681951  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:02.683028  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:04.685040  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:03.527242  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:05.527888  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:03.850211  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:06.350594  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:06.639278  374880 api_server.go:269] stopped: https://192.168.39.183:8443/healthz: Get "https://192.168.39.183:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 22:17:06.639347  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.110234  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:17:08.110269  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:17:08.110287  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.268403  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.268437  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:08.268451  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.300726  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.300787  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:08.639135  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.676558  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.676598  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:09.139592  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:09.151081  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:09.151120  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:09.639741  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:09.646812  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0108 22:17:09.656279  374880 api_server.go:141] control plane version: v1.16.0
	I0108 22:17:09.656318  374880 api_server.go:131] duration metric: took 8.017357804s to wait for apiserver health ...
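The 403/500/200 sequence above is the apiserver's /healthz endpoint coming up: anonymous requests are rejected (403), then individual post-start hooks report "failed" until bootstrap completes (500), and finally the aggregate check returns 200. The same probe can be reproduced by hand against the endpoint in the log; a sketch only, assuming the apiserver-kubelet-client key pair (the .crt is checked earlier in this log, and kubeadm normally places its .key alongside it) is acceptable as a client credential:

	# ?verbose lists the per-check [+]/[-] lines seen in the log above
	curl --cacert /var/lib/minikube/certs/ca.crt \
	     --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	     --key /var/lib/minikube/certs/apiserver-kubelet-client.key \
	     "https://192.168.39.183:8443/healthz?verbose"
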
	I0108 22:17:09.656333  374880 cni.go:84] Creating CNI manager for ""
	I0108 22:17:09.656342  374880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:17:09.658633  374880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:17:09.660081  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:17:09.670922  374880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:17:09.697148  374880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:17:09.710916  374880 system_pods.go:59] 7 kube-system pods found
	I0108 22:17:09.710958  374880 system_pods.go:61] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:09.710966  374880 system_pods.go:61] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:09.710974  374880 system_pods.go:61] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:09.710982  374880 system_pods.go:61] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Pending
	I0108 22:17:09.710988  374880 system_pods.go:61] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:09.710994  374880 system_pods.go:61] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:09.710999  374880 system_pods.go:61] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:09.711007  374880 system_pods.go:74] duration metric: took 13.819282ms to wait for pod list to return data ...
	I0108 22:17:09.711017  374880 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:17:09.717809  374880 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:17:09.717862  374880 node_conditions.go:123] node cpu capacity is 2
	I0108 22:17:09.717882  374880 node_conditions.go:105] duration metric: took 6.857808ms to run NodePressure ...
	I0108 22:17:09.717921  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:17:07.181980  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:09.182492  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:10.147851  374880 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:17:10.155593  374880 kubeadm.go:787] kubelet initialised
	I0108 22:17:10.155627  374880 kubeadm.go:788] duration metric: took 7.730921ms waiting for restarted kubelet to initialise ...
	I0108 22:17:10.155636  374880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:10.162330  374880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.173343  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.173384  374880 pod_ready.go:81] duration metric: took 11.015314ms waiting for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.173398  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.173408  374880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.181308  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "etcd-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.181354  374880 pod_ready.go:81] duration metric: took 7.925248ms waiting for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.181370  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "etcd-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.181382  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.201297  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.201340  374880 pod_ready.go:81] duration metric: took 19.943972ms waiting for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.201355  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.201364  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.212246  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.212303  374880 pod_ready.go:81] duration metric: took 10.921798ms waiting for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.212326  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.212337  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.554958  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-proxy-mfs65" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.554990  374880 pod_ready.go:81] duration metric: took 342.644311ms waiting for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.555000  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-proxy-mfs65" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.555014  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.952644  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.952690  374880 pod_ready.go:81] duration metric: took 397.663927ms waiting for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.952705  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.952721  374880 pod_ready.go:38] duration metric: took 797.073923ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:10.952756  374880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:17:10.966105  374880 ops.go:34] apiserver oom_adj: -16
	I0108 22:17:10.966142  374880 kubeadm.go:640] restartCluster took 22.925755113s
	I0108 22:17:10.966160  374880 kubeadm.go:406] StartCluster complete in 22.996305207s
	I0108 22:17:10.966183  374880 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:17:10.966269  374880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:17:10.968639  374880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:17:10.968991  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:17:10.969141  374880 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:17:10.969252  374880 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969268  374880 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969273  374880 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:17:10.969292  374880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-079759"
	I0108 22:17:10.969296  374880 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-079759"
	W0108 22:17:10.969314  374880 addons.go:246] addon metrics-server should already be in state true
	I0108 22:17:10.969351  374880 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969368  374880 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-079759"
	W0108 22:17:10.969375  374880 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:17:10.969393  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.969409  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.969785  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969823  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969832  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.969824  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969916  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.969926  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.990948  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0108 22:17:10.991126  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0108 22:17:10.991782  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:10.991979  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:10.992429  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:10.992473  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:10.992593  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:10.992618  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:10.992993  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:10.993076  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:10.993348  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:10.993741  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.993822  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.997882  374880 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-079759"
	W0108 22:17:10.997908  374880 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:17:10.997937  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.998375  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.998422  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.014704  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0108 22:17:11.015259  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.015412  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0108 22:17:11.016128  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.016160  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.016532  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.017165  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:11.017214  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.017521  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.018124  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.018140  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.018560  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.018854  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.018926  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0108 22:17:11.019671  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.020333  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.020353  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.020686  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.021353  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:11.021406  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.021696  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.024514  374880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:17:11.026172  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:17:11.026202  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:17:11.026238  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.031029  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.031951  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.031979  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.032327  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.032560  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.032709  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.032862  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.039130  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0108 22:17:11.039792  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.040408  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.040426  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.040821  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.041071  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.041764  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45497
	I0108 22:17:11.042444  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.042927  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.042952  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.043292  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.043498  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.043832  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.046099  374880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:17:07.529123  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:09.529950  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:11.048145  374880 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:17:11.048189  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:17:11.048231  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.045325  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.048952  374880 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:17:11.048976  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:17:11.049021  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.052466  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.052852  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.052891  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.053248  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.053542  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.053781  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.053964  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.062218  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.062324  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.062338  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.062363  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.063474  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.063729  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.063926  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.190657  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:17:11.190690  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:17:11.221757  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:17:11.254133  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:17:11.285976  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:17:11.286005  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:17:11.365594  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:17:11.365632  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:17:11.406494  374880 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 22:17:11.459160  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:17:11.475488  374880 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-079759" context rescaled to 1 replicas
	I0108 22:17:11.475557  374880 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:17:11.478952  374880 out.go:177] * Verifying Kubernetes components...
	I0108 22:17:11.480674  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:17:12.238037  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.016231756s)
	I0108 22:17:12.238158  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.238178  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.238585  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.238616  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.238630  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.238640  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.238649  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.238928  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.238953  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.292897  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.292926  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.293228  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.293249  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.297621  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.043443256s)
	I0108 22:17:12.297697  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.297717  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.298050  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.298107  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.298121  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.298136  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.298151  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.298377  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.298434  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.298449  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.460391  374880 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-079759" to be "Ready" ...
	I0108 22:17:12.460519  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.001301389s)
	I0108 22:17:12.460578  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.460600  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.460930  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.460950  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.460970  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.460980  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.461238  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.461262  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.461278  374880 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-079759"
	I0108 22:17:12.461289  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.464523  374880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
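The addon step just above amounts to file pushes plus kubectl apply: the metrics-server, storageclass, and storage-provisioner manifests are copied into /etc/kubernetes/addons/ and applied with the cluster's own v1.16.0 kubectl over SSH. A loose local-machine equivalent, sketched with the in-VM paths from the log and shelling out to whatever kubectl is on PATH rather than the bundled binary:

// apply_addon.go: apply a manifest with kubectl against a specific kubeconfig,
// roughly mirroring the "kubectl apply -f /etc/kubernetes/addons/..." commands
// in the log. Sketch only; not minikube source.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifest(kubeconfig, manifest string) error {
	cmd := exec.Command("kubectl", "apply", "-f", manifest)
	// Point kubectl at the target cluster instead of the default context.
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	// Paths as they appear inside the VM in the log; adjust for a real run.
	if err := applyManifest(
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
	); err != nil {
		fmt.Println("apply failed:", err)
		os.Exit(1)
	}
}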
	I0108 22:17:08.848369  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:11.349358  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:12.466030  374880 addons.go:508] enable addons completed in 1.496887794s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0108 22:17:14.465035  374880 node_ready.go:58] node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:11.186335  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:13.680427  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:12.029896  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:14.527011  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:13.847034  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:16.348875  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:16.465852  374880 node_ready.go:58] node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:18.965439  374880 node_ready.go:49] node "old-k8s-version-079759" has status "Ready":"True"
	I0108 22:17:18.965487  374880 node_ready.go:38] duration metric: took 6.505055778s waiting for node "old-k8s-version-079759" to be "Ready" ...
	I0108 22:17:18.965512  374880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:18.972414  374880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.981201  374880 pod_ready.go:92] pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.981242  374880 pod_ready.go:81] duration metric: took 8.788084ms waiting for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.981258  374880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.987118  374880 pod_ready.go:92] pod "etcd-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.987147  374880 pod_ready.go:81] duration metric: took 5.880499ms waiting for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.987165  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.995928  374880 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.995972  374880 pod_ready.go:81] duration metric: took 8.795387ms waiting for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.995990  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.006241  374880 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.006273  374880 pod_ready.go:81] duration metric: took 10.274527ms waiting for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.006288  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.366551  374880 pod_ready.go:92] pod "kube-proxy-mfs65" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.366588  374880 pod_ready.go:81] duration metric: took 360.29132ms waiting for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.366607  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.766225  374880 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.766266  374880 pod_ready.go:81] duration metric: took 399.648483ms waiting for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.766287  374880 pod_ready.go:38] duration metric: took 800.758248ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:19.766317  374880 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:17:19.766407  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:19.790384  374880 api_server.go:72] duration metric: took 8.314784167s to wait for apiserver process to appear ...
	I0108 22:17:19.790417  374880 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:17:19.790442  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:15.682742  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:18.181808  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:19.813424  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0108 22:17:19.814615  374880 api_server.go:141] control plane version: v1.16.0
	I0108 22:17:19.814638  374880 api_server.go:131] duration metric: took 24.214441ms to wait for apiserver health ...
	I0108 22:17:19.814647  374880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:17:19.967792  374880 system_pods.go:59] 7 kube-system pods found
	I0108 22:17:19.967850  374880 system_pods.go:61] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:19.967858  374880 system_pods.go:61] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:19.967865  374880 system_pods.go:61] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:19.967871  374880 system_pods.go:61] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Running
	I0108 22:17:19.967875  374880 system_pods.go:61] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:19.967882  374880 system_pods.go:61] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:19.967896  374880 system_pods.go:61] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:19.967908  374880 system_pods.go:74] duration metric: took 153.252828ms to wait for pod list to return data ...
	I0108 22:17:19.967925  374880 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:17:20.166954  374880 default_sa.go:45] found service account: "default"
	I0108 22:17:20.166999  374880 default_sa.go:55] duration metric: took 199.059234ms for default service account to be created ...
	I0108 22:17:20.167013  374880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:17:20.367805  374880 system_pods.go:86] 7 kube-system pods found
	I0108 22:17:20.367843  374880 system_pods.go:89] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:20.367851  374880 system_pods.go:89] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:20.367878  374880 system_pods.go:89] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:20.367889  374880 system_pods.go:89] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Running
	I0108 22:17:20.367895  374880 system_pods.go:89] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:20.367901  374880 system_pods.go:89] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:20.367908  374880 system_pods.go:89] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:20.367917  374880 system_pods.go:126] duration metric: took 200.897828ms to wait for k8s-apps to be running ...
	I0108 22:17:20.367931  374880 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:17:20.368002  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:17:20.384736  374880 system_svc.go:56] duration metric: took 16.789711ms WaitForService to wait for kubelet.
	I0108 22:17:20.384777  374880 kubeadm.go:581] duration metric: took 8.909185454s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:17:20.384805  374880 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:17:20.566662  374880 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:17:20.566699  374880 node_conditions.go:123] node cpu capacity is 2
	I0108 22:17:20.566713  374880 node_conditions.go:105] duration metric: took 181.900804ms to run NodePressure ...
	I0108 22:17:20.566733  374880 start.go:228] waiting for startup goroutines ...
	I0108 22:17:20.566743  374880 start.go:233] waiting for cluster config update ...
	I0108 22:17:20.566758  374880 start.go:242] writing updated cluster config ...
	I0108 22:17:20.567148  374880 ssh_runner.go:195] Run: rm -f paused
	I0108 22:17:20.625096  374880 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0108 22:17:20.627497  374880 out.go:177] 
	W0108 22:17:20.629694  374880 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0108 22:17:20.631265  374880 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0108 22:17:20.632916  374880 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-079759" cluster and "default" namespace by default
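From this point the log is the other profiles (375205, 375293, 375556) repeating the same pod_ready check every couple of seconds and finding their metrics-server and coredns pods still not "Ready". That check is a read of the pod's Ready condition. A minimal client-go sketch of such a poll, with the kubeconfig path as a placeholder and the pod name borrowed from the log, and not minikube's actual wait code:

// pod_ready_poll.go: poll a pod until its Ready condition is True or a
// deadline passes. Sketch only; stand-in names, not minikube source.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube uses the profile's own kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	const ns, name = "kube-system", "metrics-server-57f55c9bc5-pk8bm" // pod name taken from the log
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out; pod never reported Ready")
}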
	I0108 22:17:16.529078  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:19.030929  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:18.848535  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:20.848603  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:20.182275  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:22.183490  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:24.682561  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:21.528256  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:23.529114  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:26.027560  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:23.346430  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:25.348995  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.182420  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:29.183480  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.530319  375556 pod_ready.go:92] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.530347  375556 pod_ready.go:81] duration metric: took 40.011595743s waiting for pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.530357  375556 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.537548  375556 pod_ready.go:92] pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.537577  375556 pod_ready.go:81] duration metric: took 7.212322ms waiting for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.537588  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.549788  375556 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.549830  375556 pod_ready.go:81] duration metric: took 12.233749ms waiting for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.549845  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.558337  375556 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.558364  375556 pod_ready.go:81] duration metric: took 8.510648ms waiting for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.558375  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4xsp" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.568980  375556 pod_ready.go:92] pod "kube-proxy-f4xsp" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.569008  375556 pod_ready.go:81] duration metric: took 10.626925ms waiting for pod "kube-proxy-f4xsp" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.569018  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.924746  375556 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.924792  375556 pod_ready.go:81] duration metric: took 355.765575ms waiting for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.924810  375556 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:29.934031  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.846645  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:29.848666  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:32.347317  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:31.681795  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.183509  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:31.935866  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.434680  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.850409  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:37.348417  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:36.681720  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:39.187220  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:36.933398  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:38.937527  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:39.849140  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:42.348407  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:41.681963  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:44.183281  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:41.434499  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:43.438745  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:45.934532  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:44.846802  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:46.847285  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:46.683139  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:49.180610  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:47.942228  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:50.434779  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:49.346290  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:51.346592  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:51.181365  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:53.182147  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:52.435305  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:54.933017  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:53.347169  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:55.847921  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:55.680794  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:57.683942  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:59.684807  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:56.933676  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:59.433266  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:58.346863  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:00.351598  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:02.358340  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:02.183383  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:04.684356  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:01.438892  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:03.942882  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:04.845380  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:06.850561  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:07.182060  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:09.182524  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:06.433230  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:08.435570  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:10.933834  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:08.853139  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:11.345311  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:11.183083  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.185196  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.435974  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.934920  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.347243  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.350752  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.683154  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:18.183396  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:17.938857  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.434388  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:17.849663  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.349073  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.349854  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.183740  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.681755  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.938829  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:24.940050  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:24.845935  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:26.848602  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:25.182926  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:27.681471  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:27.433983  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:29.933179  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:29.348482  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:31.848768  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:30.182593  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:32.184633  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:34.684351  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:31.935920  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:34.432407  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:33.849853  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:36.347248  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:37.185296  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:39.683266  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:36.434742  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:38.935788  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:38.347422  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:40.847846  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:42.184271  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:44.191899  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:41.434194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:43.435816  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:45.436582  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:43.348144  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:45.850291  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:46.681976  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:48.684379  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:47.934501  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:50.432989  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:48.346408  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:50.348943  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:51.181865  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:53.182990  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:52.433070  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:54.442432  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:52.846607  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:54.850642  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:57.347230  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:55.681392  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:57.683410  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:56.932551  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:58.935585  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:59.348127  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:01.848981  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:00.183662  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:02.681392  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:04.683283  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:01.433125  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:03.433714  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:05.434985  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:03.849460  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:06.349541  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:07.182372  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:09.681196  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:07.935969  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:10.435837  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:08.847292  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:10.850261  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:11.681770  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:13.683390  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:12.439563  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:14.933378  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:13.347217  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:15.847524  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:16.181226  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:18.182271  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:16.936400  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:19.433956  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:18.347048  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:20.846947  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:20.182396  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:22.681453  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:24.682678  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:21.934747  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:23.935826  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:22.847819  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:24.847981  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:27.346372  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:27.181829  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:29.686277  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:26.433266  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:28.433601  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:30.435331  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:29.349171  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:31.848107  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:31.686784  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.181838  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:32.932383  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.933487  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.349446  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:36.845807  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:36.182711  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:38.183592  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:37.433841  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:39.440368  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:38.847000  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:40.849528  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:40.681394  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:42.681803  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:41.934279  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:44.433480  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:43.346283  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:45.849805  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:45.182604  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:47.183086  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:49.681891  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:46.934165  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:49.433592  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:48.346422  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:50.346711  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:52.347386  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:52.181241  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:54.184167  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:51.435757  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:53.932937  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:55.935076  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:54.847306  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:56.849761  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:56.681736  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:59.182156  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:58.433892  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:00.435066  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:59.348176  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:01.847094  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:01.682869  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.183165  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:02.934032  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.935393  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.347516  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:06.846388  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:06.681333  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:08.684291  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:07.436354  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:09.934776  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:08.849876  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.346794  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.184760  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.681471  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.935382  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.935718  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.347573  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:15.846434  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:15.684425  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:18.182489  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:16.435556  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:18.934238  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:17.847804  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:19.851620  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:22.347305  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:20.183538  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:21.174145  375205 pod_ready.go:81] duration metric: took 4m0.001134505s waiting for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" ...
	E0108 22:20:21.174196  375205 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:20:21.174225  375205 pod_ready.go:38] duration metric: took 4m11.09670924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:20:21.174739  375205 kubeadm.go:640] restartCluster took 4m32.919154523s
	W0108 22:20:21.174932  375205 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:20:21.175031  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 22:20:21.437480  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:23.437985  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:25.934631  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:24.847918  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:27.354150  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:28.434309  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:30.935564  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:29.845550  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:31.847597  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:32.338942  375293 pod_ready.go:81] duration metric: took 4m0.001163118s waiting for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" ...
	E0108 22:20:32.338972  375293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:20:32.338994  375293 pod_ready.go:38] duration metric: took 4m8.522193777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:20:32.339022  375293 kubeadm.go:640] restartCluster took 4m31.730992352s
	W0108 22:20:32.339087  375293 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:20:32.339116  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 22:20:32.935958  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:35.434816  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:36.302806  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.127706719s)
	I0108 22:20:36.302938  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:20:36.321621  375205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:20:36.334281  375205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:20:36.346671  375205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:20:36.346717  375205 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:20:36.614321  375205 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:20:37.936328  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:40.435692  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:42.933586  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:45.434194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:48.562754  375205 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0108 22:20:48.562854  375205 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:20:48.562933  375205 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:20:48.563069  375205 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:20:48.563228  375205 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:20:48.563339  375205 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:20:48.565241  375205 out.go:204]   - Generating certificates and keys ...
	I0108 22:20:48.565369  375205 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:20:48.565449  375205 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:20:48.565542  375205 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:20:48.565610  375205 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:20:48.565733  375205 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:20:48.565840  375205 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:20:48.565938  375205 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:20:48.566036  375205 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:20:48.566148  375205 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:20:48.566255  375205 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:20:48.566336  375205 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:20:48.566437  375205 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:20:48.566521  375205 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:20:48.566606  375205 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0108 22:20:48.566682  375205 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:20:48.566771  375205 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:20:48.566859  375205 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:20:48.566957  375205 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:20:48.567046  375205 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:20:48.569013  375205 out.go:204]   - Booting up control plane ...
	I0108 22:20:48.569130  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:20:48.569247  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:20:48.569353  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:20:48.569468  375205 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:20:48.569588  375205 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:20:48.569656  375205 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:20:48.569873  375205 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:20:48.569977  375205 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002900 seconds
	I0108 22:20:48.570115  375205 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:20:48.570289  375205 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:20:48.570372  375205 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:20:48.570558  375205 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-675668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:20:48.570648  375205 kubeadm.go:322] [bootstrap-token] Using token: t5purj.kqjcf0swk5rb5mxk
	I0108 22:20:48.572249  375205 out.go:204]   - Configuring RBAC rules ...
	I0108 22:20:48.572407  375205 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:20:48.572525  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:20:48.572698  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:20:48.572845  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:20:48.572985  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:20:48.573060  375205 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:20:48.573192  375205 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:20:48.573253  375205 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:20:48.573309  375205 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:20:48.573316  375205 kubeadm.go:322] 
	I0108 22:20:48.573365  375205 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:20:48.573372  375205 kubeadm.go:322] 
	I0108 22:20:48.573433  375205 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:20:48.573440  375205 kubeadm.go:322] 
	I0108 22:20:48.573466  375205 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:20:48.573516  375205 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:20:48.573559  375205 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:20:48.573565  375205 kubeadm.go:322] 
	I0108 22:20:48.573608  375205 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:20:48.573614  375205 kubeadm.go:322] 
	I0108 22:20:48.573656  375205 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:20:48.573663  375205 kubeadm.go:322] 
	I0108 22:20:48.573705  375205 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:20:48.573774  375205 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:20:48.573830  375205 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:20:48.573836  375205 kubeadm.go:322] 
	I0108 22:20:48.573902  375205 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:20:48.573968  375205 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:20:48.573974  375205 kubeadm.go:322] 
	I0108 22:20:48.574041  375205 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t5purj.kqjcf0swk5rb5mxk \
	I0108 22:20:48.574137  375205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:20:48.574168  375205 kubeadm.go:322] 	--control-plane 
	I0108 22:20:48.574179  375205 kubeadm.go:322] 
	I0108 22:20:48.574277  375205 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:20:48.574288  375205 kubeadm.go:322] 
	I0108 22:20:48.574369  375205 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t5purj.kqjcf0swk5rb5mxk \
	I0108 22:20:48.574510  375205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:20:48.574532  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:20:48.574545  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:20:48.576776  375205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:20:48.578238  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:20:48.605767  375205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:20:48.656602  375205 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:20:48.656700  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=no-preload-675668 minikube.k8s.io/updated_at=2024_01_08T22_20_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:48.656701  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:48.954525  375205 ops.go:34] apiserver oom_adj: -16
	I0108 22:20:48.954705  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:49.454907  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.014263  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (17.675119667s)
	I0108 22:20:50.014357  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:20:50.032616  375293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:20:50.046779  375293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:20:50.059243  375293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:20:50.059321  375293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:20:50.125341  375293 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:20:50.125427  375293 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:20:50.314274  375293 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:20:50.314692  375293 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:20:50.314859  375293 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:20:50.613241  375293 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:20:47.934671  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:50.435675  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:50.615123  375293 out.go:204]   - Generating certificates and keys ...
	I0108 22:20:50.615298  375293 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:20:50.615442  375293 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:20:50.615588  375293 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:20:50.615684  375293 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:20:50.615978  375293 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:20:50.616644  375293 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:20:50.617070  375293 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:20:50.617625  375293 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:20:50.618175  375293 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:20:50.618746  375293 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:20:50.619222  375293 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:20:50.619315  375293 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:20:50.750595  375293 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:20:50.925827  375293 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:20:51.210091  375293 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:20:51.341979  375293 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:20:51.342383  375293 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:20:51.346252  375293 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:20:51.348515  375293 out.go:204]   - Booting up control plane ...
	I0108 22:20:51.348656  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:20:51.349029  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:20:51.350374  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:20:51.368778  375293 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:20:51.370050  375293 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:20:51.370127  375293 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:20:51.533956  375293 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:20:49.955240  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.455461  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.954656  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:51.455494  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:51.954708  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.454966  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.955643  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:53.454696  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:53.955234  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:54.455436  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.934792  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:55.433713  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:54.955090  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:55.454594  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:55.954634  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:56.455479  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:56.954866  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.455465  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.954857  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:58.454611  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:58.955416  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:59.455690  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.434365  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:59.932616  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:01.038928  375293 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503619 seconds
	I0108 22:21:01.039086  375293 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:21:01.066204  375293 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:21:01.633859  375293 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:21:01.634073  375293 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-903819 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:21:02.161422  375293 kubeadm.go:322] [bootstrap-token] Using token: m5gf05.lf63ehk148mqhzsy
	I0108 22:20:59.954870  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:00.455632  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:00.954611  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:01.455512  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:01.955058  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.130771  375205 kubeadm.go:1088] duration metric: took 13.474145806s to wait for elevateKubeSystemPrivileges.
	I0108 22:21:02.130812  375205 kubeadm.go:406] StartCluster complete in 5m13.930335887s
	I0108 22:21:02.130872  375205 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:02.131052  375205 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:21:02.133316  375205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:02.133620  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:21:02.133769  375205 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:21:02.133851  375205 addons.go:69] Setting storage-provisioner=true in profile "no-preload-675668"
	I0108 22:21:02.133874  375205 addons.go:237] Setting addon storage-provisioner=true in "no-preload-675668"
	W0108 22:21:02.133885  375205 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:21:02.133902  375205 addons.go:69] Setting default-storageclass=true in profile "no-preload-675668"
	I0108 22:21:02.133931  375205 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-675668"
	I0108 22:21:02.133944  375205 addons.go:69] Setting metrics-server=true in profile "no-preload-675668"
	I0108 22:21:02.133960  375205 addons.go:237] Setting addon metrics-server=true in "no-preload-675668"
	W0108 22:21:02.133970  375205 addons.go:246] addon metrics-server should already be in state true
	I0108 22:21:02.134007  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.133934  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.134493  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134492  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134531  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.133882  375205 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:21:02.134595  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134626  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.134679  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.159537  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0108 22:21:02.159560  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0108 22:21:02.159658  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0108 22:21:02.160218  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160310  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160353  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160816  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160832  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.160837  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160856  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.160923  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160934  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.161384  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161384  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161436  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161578  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.162110  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.162156  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.163070  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.163111  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.166373  375205 addons.go:237] Setting addon default-storageclass=true in "no-preload-675668"
	W0108 22:21:02.166398  375205 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:21:02.166437  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.166793  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.166851  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.186248  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0108 22:21:02.186805  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.187689  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.187721  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.189657  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.189934  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0108 22:21:02.190139  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.190885  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.192512  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.192561  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.192883  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.193058  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.193793  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.193846  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.194831  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0108 22:21:02.197130  375205 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:21:02.195453  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.198890  375205 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:02.198908  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:21:02.198928  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.199474  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.199496  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.202159  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.202458  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.204081  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.204440  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.204470  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.204907  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.205095  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.206369  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.206382  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.208865  375205 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:21:02.207548  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.210754  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:21:02.210777  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:21:02.210806  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.215494  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.216525  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.216572  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.217020  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.217270  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.217433  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.217548  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.218155  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0108 22:21:02.219031  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.219589  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.219613  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.220024  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.220222  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.223150  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.223618  375205 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:02.223638  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:21:02.223662  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.227537  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.228321  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.228364  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.228729  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.228986  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.229244  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.229385  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.376102  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:02.442186  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:21:02.442220  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:21:02.463490  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:02.511966  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:21:02.512007  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:21:02.516771  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:21:02.645916  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:02.645958  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:21:02.693299  375205 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-675668" context rescaled to 1 replicas
	I0108 22:21:02.693524  375205 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.153 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:21:02.696133  375205 out.go:177] * Verifying Kubernetes components...
	I0108 22:21:02.163532  375293 out.go:204]   - Configuring RBAC rules ...
	I0108 22:21:02.163667  375293 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:21:02.202175  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:21:02.230273  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:21:02.239237  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:21:02.245892  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:21:02.262139  375293 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:21:02.282319  375293 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:21:02.634155  375293 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:21:02.712856  375293 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:21:02.712895  375293 kubeadm.go:322] 
	I0108 22:21:02.713004  375293 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:21:02.713029  375293 kubeadm.go:322] 
	I0108 22:21:02.713122  375293 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:21:02.713138  375293 kubeadm.go:322] 
	I0108 22:21:02.713175  375293 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:21:02.713243  375293 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:21:02.713342  375293 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:21:02.713367  375293 kubeadm.go:322] 
	I0108 22:21:02.713461  375293 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:21:02.713491  375293 kubeadm.go:322] 
	I0108 22:21:02.713571  375293 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:21:02.713582  375293 kubeadm.go:322] 
	I0108 22:21:02.713672  375293 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:21:02.713775  375293 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:21:02.713903  375293 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:21:02.713916  375293 kubeadm.go:322] 
	I0108 22:21:02.714019  375293 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:21:02.714118  375293 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:21:02.714132  375293 kubeadm.go:322] 
	I0108 22:21:02.714275  375293 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m5gf05.lf63ehk148mqhzsy \
	I0108 22:21:02.714404  375293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:21:02.714427  375293 kubeadm.go:322] 	--control-plane 
	I0108 22:21:02.714439  375293 kubeadm.go:322] 
	I0108 22:21:02.714524  375293 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:21:02.714533  375293 kubeadm.go:322] 
	I0108 22:21:02.714623  375293 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m5gf05.lf63ehk148mqhzsy \
	I0108 22:21:02.714748  375293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:21:02.715538  375293 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:21:02.715812  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:21:02.715830  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:21:02.717948  375293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:21:02.719376  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:21:02.757728  375293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:21:02.792630  375293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:21:02.792734  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.792736  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=embed-certs-903819 minikube.k8s.io/updated_at=2024_01_08T22_21_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.697938  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:02.989011  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:03.814186  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437994456s)
	I0108 22:21:03.814254  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814255  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.350714909s)
	I0108 22:21:03.814286  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.297474579s)
	I0108 22:21:03.814302  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814321  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814317  375205 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0108 22:21:03.814318  375205 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.116341471s)
	I0108 22:21:03.814267  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814391  375205 node_ready.go:35] waiting up to 6m0s for node "no-preload-675668" to be "Ready" ...
	I0108 22:21:03.814667  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.814692  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.814734  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.814742  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.814765  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814789  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814821  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.814855  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.814868  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814878  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814994  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.815008  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.816606  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.816639  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.816649  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.844508  375205 node_ready.go:49] node "no-preload-675668" has status "Ready":"True"
	I0108 22:21:03.844562  375205 node_ready.go:38] duration metric: took 30.150881ms waiting for node "no-preload-675668" to be "Ready" ...
	I0108 22:21:03.844582  375205 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:03.895674  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.895707  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.896169  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.896196  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.896243  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.916148  375205 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-q6x86" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:04.208779  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.219716131s)
	I0108 22:21:04.208834  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:04.208853  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:04.209240  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:04.209262  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:04.209275  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:04.209289  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:04.209564  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:04.209585  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:04.209599  375205 addons.go:473] Verifying addon metrics-server=true in "no-preload-675668"
	I0108 22:21:04.211402  375205 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 22:21:04.212659  375205 addons.go:508] enable addons completed in 2.078891102s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0108 22:21:01.934579  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:03.936076  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:05.936317  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:03.317224  375293 ops.go:34] apiserver oom_adj: -16
	I0108 22:21:03.317384  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:03.817786  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:04.318579  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:04.817664  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.317487  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.818475  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:06.318507  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:06.818090  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:07.318335  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.932344  375205 pod_ready.go:92] pod "coredns-76f75df574-q6x86" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.932389  375205 pod_ready.go:81] duration metric: took 2.016206796s waiting for pod "coredns-76f75df574-q6x86" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.932404  375205 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.941282  375205 pod_ready.go:92] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.941316  375205 pod_ready.go:81] duration metric: took 8.903771ms waiting for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.941331  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.950226  375205 pod_ready.go:92] pod "kube-apiserver-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.950258  375205 pod_ready.go:81] duration metric: took 8.918375ms waiting for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.950273  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.972742  375205 pod_ready.go:92] pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.972794  375205 pod_ready.go:81] duration metric: took 22.511438ms waiting for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.972816  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b2nx2" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:06.981190  375205 pod_ready.go:92] pod "kube-proxy-b2nx2" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:06.981214  375205 pod_ready.go:81] duration metric: took 1.008391493s waiting for pod "kube-proxy-b2nx2" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:06.981225  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:07.121313  375205 pod_ready.go:92] pod "kube-scheduler-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:07.121348  375205 pod_ready.go:81] duration metric: took 140.114425ms waiting for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:07.121363  375205 pod_ready.go:38] duration metric: took 3.276764424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:07.121385  375205 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:21:07.121458  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:21:07.138313  375205 api_server.go:72] duration metric: took 4.444721115s to wait for apiserver process to appear ...
	I0108 22:21:07.138352  375205 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:21:07.138384  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:21:07.145653  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 200:
	ok
	I0108 22:21:07.148112  375205 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 22:21:07.148146  375205 api_server.go:131] duration metric: took 9.785033ms to wait for apiserver health ...
	I0108 22:21:07.148158  375205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:21:07.325218  375205 system_pods.go:59] 8 kube-system pods found
	I0108 22:21:07.325263  375205 system_pods.go:61] "coredns-76f75df574-q6x86" [6cad2e0f-a7af-453d-9eaf-55b56e41e27b] Running
	I0108 22:21:07.325268  375205 system_pods.go:61] "etcd-no-preload-675668" [cd434699-162a-4b04-853d-94dbb1254279] Running
	I0108 22:21:07.325273  375205 system_pods.go:61] "kube-apiserver-no-preload-675668" [d22859b8-f451-40b8-85d7-7f3d548b1af1] Running
	I0108 22:21:07.325279  375205 system_pods.go:61] "kube-controller-manager-no-preload-675668" [8b52fdfe-124a-4d08-b66b-41f1b051fe95] Running
	I0108 22:21:07.325283  375205 system_pods.go:61] "kube-proxy-b2nx2" [b6106f11-9345-4915-b7cc-d2671a7c4e72] Running
	I0108 22:21:07.325287  375205 system_pods.go:61] "kube-scheduler-no-preload-675668" [83562817-27bf-4265-88f0-3dad667687c5] Running
	I0108 22:21:07.325296  375205 system_pods.go:61] "metrics-server-57f55c9bc5-vb2kj" [45489720-2506-46fa-8833-02cbae6f122b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:21:07.325305  375205 system_pods.go:61] "storage-provisioner" [a1c64608-a169-455b-a5e9-0ecb4161432c] Running
	I0108 22:21:07.325323  375205 system_pods.go:74] duration metric: took 177.156331ms to wait for pod list to return data ...
	I0108 22:21:07.325337  375205 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:21:07.521751  375205 default_sa.go:45] found service account: "default"
	I0108 22:21:07.521796  375205 default_sa.go:55] duration metric: took 196.444982ms for default service account to be created ...
	I0108 22:21:07.521809  375205 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:21:07.725848  375205 system_pods.go:86] 8 kube-system pods found
	I0108 22:21:07.725888  375205 system_pods.go:89] "coredns-76f75df574-q6x86" [6cad2e0f-a7af-453d-9eaf-55b56e41e27b] Running
	I0108 22:21:07.725894  375205 system_pods.go:89] "etcd-no-preload-675668" [cd434699-162a-4b04-853d-94dbb1254279] Running
	I0108 22:21:07.725899  375205 system_pods.go:89] "kube-apiserver-no-preload-675668" [d22859b8-f451-40b8-85d7-7f3d548b1af1] Running
	I0108 22:21:07.725904  375205 system_pods.go:89] "kube-controller-manager-no-preload-675668" [8b52fdfe-124a-4d08-b66b-41f1b051fe95] Running
	I0108 22:21:07.725908  375205 system_pods.go:89] "kube-proxy-b2nx2" [b6106f11-9345-4915-b7cc-d2671a7c4e72] Running
	I0108 22:21:07.725913  375205 system_pods.go:89] "kube-scheduler-no-preload-675668" [83562817-27bf-4265-88f0-3dad667687c5] Running
	I0108 22:21:07.725920  375205 system_pods.go:89] "metrics-server-57f55c9bc5-vb2kj" [45489720-2506-46fa-8833-02cbae6f122b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:21:07.725926  375205 system_pods.go:89] "storage-provisioner" [a1c64608-a169-455b-a5e9-0ecb4161432c] Running
	I0108 22:21:07.725937  375205 system_pods.go:126] duration metric: took 204.121913ms to wait for k8s-apps to be running ...
	I0108 22:21:07.725946  375205 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:21:07.726014  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:07.745719  375205 system_svc.go:56] duration metric: took 19.7558ms WaitForService to wait for kubelet.
	I0108 22:21:07.745762  375205 kubeadm.go:581] duration metric: took 5.052181219s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:21:07.745787  375205 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:21:07.923051  375205 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:21:07.923108  375205 node_conditions.go:123] node cpu capacity is 2
	I0108 22:21:07.923124  375205 node_conditions.go:105] duration metric: took 177.330669ms to run NodePressure ...
	I0108 22:21:07.923140  375205 start.go:228] waiting for startup goroutines ...
	I0108 22:21:07.923150  375205 start.go:233] waiting for cluster config update ...
	I0108 22:21:07.923164  375205 start.go:242] writing updated cluster config ...
	I0108 22:21:07.923585  375205 ssh_runner.go:195] Run: rm -f paused
	I0108 22:21:07.985436  375205 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0108 22:21:07.987522  375205 out.go:177] * Done! kubectl is now configured to use "no-preload-675668" cluster and "default" namespace by default
	I0108 22:21:07.936490  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:10.434333  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:07.817734  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:08.318472  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:08.818320  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:09.317791  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:09.818298  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:10.317739  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:10.818233  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:11.317545  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:11.818344  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:12.317620  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:12.817911  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:13.317976  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:13.817670  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:14.317747  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:14.817596  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:15.318339  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:15.465438  375293 kubeadm.go:1088] duration metric: took 12.672788245s to wait for elevateKubeSystemPrivileges.
	I0108 22:21:15.465476  375293 kubeadm.go:406] StartCluster complete in 5m14.917822837s
	I0108 22:21:15.465503  375293 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:15.465612  375293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:21:15.468437  375293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:15.468772  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:21:15.468921  375293 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:21:15.469008  375293 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-903819"
	I0108 22:21:15.469017  375293 addons.go:69] Setting default-storageclass=true in profile "embed-certs-903819"
	I0108 22:21:15.469036  375293 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-903819"
	I0108 22:21:15.469052  375293 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 22:21:15.469064  375293 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:21:15.469060  375293 addons.go:69] Setting metrics-server=true in profile "embed-certs-903819"
	I0108 22:21:15.469037  375293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-903819"
	I0108 22:21:15.469111  375293 addons.go:237] Setting addon metrics-server=true in "embed-certs-903819"
	W0108 22:21:15.469128  375293 addons.go:246] addon metrics-server should already be in state true
	I0108 22:21:15.469139  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.469189  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.469584  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469635  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469676  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.469647  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.469585  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469825  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.488818  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0108 22:21:15.489266  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.491196  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39101
	I0108 22:21:15.491253  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0108 22:21:15.491759  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.491787  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.491816  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.492193  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.492365  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.492383  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.492747  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.492790  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.493002  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.493056  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.493670  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.493702  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.494305  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.494329  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.494841  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.495072  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.499830  375293 addons.go:237] Setting addon default-storageclass=true in "embed-certs-903819"
	W0108 22:21:15.499867  375293 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:21:15.499903  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.500396  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.500568  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.516135  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0108 22:21:15.516748  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.517517  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.517566  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.518117  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.518378  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.519282  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0108 22:21:15.520505  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.520596  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.522491  375293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:21:15.521662  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.524042  375293 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:15.524051  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.524059  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:21:15.524081  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.524560  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.524774  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.527237  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.529443  375293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:21:15.528147  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.528787  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.531192  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:21:15.531217  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:21:15.531249  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.531217  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.531343  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.531599  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.531825  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.532078  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.535903  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0108 22:21:15.536161  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.536527  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.536553  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.536618  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.536766  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.536994  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.537194  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.537359  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.537370  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.537426  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.537948  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.538486  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.538508  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.557562  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I0108 22:21:15.558072  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.558613  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.558643  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.559096  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.559318  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.561435  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.561769  375293 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:15.561788  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:21:15.561809  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.564959  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.565410  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.565442  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.565628  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.565836  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.565994  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.566145  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.740070  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:21:15.740112  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:21:15.762954  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:15.779320  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:15.819423  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:21:15.821997  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:21:15.822039  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:21:15.911195  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:15.911231  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:21:16.022419  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:16.061550  375293 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-903819" context rescaled to 1 replicas
	I0108 22:21:16.061625  375293 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:21:16.063813  375293 out.go:177] * Verifying Kubernetes components...
	I0108 22:21:12.435066  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:14.936374  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:16.065433  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:17.600634  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.837630321s)
	I0108 22:21:17.600727  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.600751  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.601111  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.601133  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:17.601145  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.601155  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.601162  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.601437  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.601478  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.601496  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:17.658136  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.658160  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.658512  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.658539  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.658556  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.633155  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.813676374s)
	I0108 22:21:18.633329  375293 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0108 22:21:18.633460  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.610999344s)
	I0108 22:21:18.633535  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.633576  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.633728  375293 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.568262314s)
	I0108 22:21:18.633793  375293 node_ready.go:35] waiting up to 6m0s for node "embed-certs-903819" to be "Ready" ...
	I0108 22:21:18.634123  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.634212  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.634247  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.634274  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.634293  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.634767  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.634836  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.634875  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.634901  375293 addons.go:473] Verifying addon metrics-server=true in "embed-certs-903819"
	I0108 22:21:18.638741  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.85936832s)
	I0108 22:21:18.638810  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.638826  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.639227  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.639301  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.639322  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.639333  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.639353  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.639611  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.639643  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.639652  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.641291  375293 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0108 22:21:17.433629  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:19.436354  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:18.642785  375293 addons.go:508] enable addons completed in 3.173862498s: enabled=[default-storageclass metrics-server storage-provisioner]
	I0108 22:21:18.710469  375293 node_ready.go:49] node "embed-certs-903819" has status "Ready":"True"
	I0108 22:21:18.710510  375293 node_ready.go:38] duration metric: took 76.686364ms waiting for node "embed-certs-903819" to be "Ready" ...
	I0108 22:21:18.710526  375293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:18.737405  375293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.747084  375293 pod_ready.go:92] pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.747120  375293 pod_ready.go:81] duration metric: took 1.009672279s waiting for pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.747136  375293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.758191  375293 pod_ready.go:92] pod "etcd-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.758217  375293 pod_ready.go:81] duration metric: took 11.073973ms waiting for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.758227  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.770167  375293 pod_ready.go:92] pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.770199  375293 pod_ready.go:81] duration metric: took 11.962809ms waiting for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.770213  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.778549  375293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.778576  375293 pod_ready.go:81] duration metric: took 8.355574ms waiting for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.778593  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqj9b" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.291841  375293 pod_ready.go:92] pod "kube-proxy-hqj9b" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:20.291889  375293 pod_ready.go:81] duration metric: took 513.287335ms waiting for pod "kube-proxy-hqj9b" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.291907  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.639437  375293 pod_ready.go:92] pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:20.639482  375293 pod_ready.go:81] duration metric: took 347.563689ms waiting for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.639507  375293 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:22.648411  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:21.933418  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:24.435043  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:25.150951  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:27.650444  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:26.937451  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:27.925059  375556 pod_ready.go:81] duration metric: took 4m0.000207907s waiting for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" ...
	E0108 22:21:27.925103  375556 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:21:27.925128  375556 pod_ready.go:38] duration metric: took 4m40.430696194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:27.925167  375556 kubeadm.go:640] restartCluster took 5m4.814420494s
	W0108 22:21:27.925297  375556 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:21:27.925360  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 22:21:30.149112  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:32.149588  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:34.150894  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:36.649733  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:39.151257  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:41.647739  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:43.145693  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.220300874s)
	I0108 22:21:43.145789  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:43.162489  375556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:21:43.174147  375556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:21:43.184922  375556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:21:43.184985  375556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:21:43.249215  375556 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:21:43.249349  375556 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:21:43.441703  375556 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:21:43.441851  375556 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:21:43.441998  375556 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:21:43.739390  375556 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:21:43.742109  375556 out.go:204]   - Generating certificates and keys ...
	I0108 22:21:43.742213  375556 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:21:43.742298  375556 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:21:43.742469  375556 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:21:43.742561  375556 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:21:43.742651  375556 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:21:43.743428  375556 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:21:43.744699  375556 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:21:43.746015  375556 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:21:43.747206  375556 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:21:43.748318  375556 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:21:43.749156  375556 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:21:43.749237  375556 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:21:43.859844  375556 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:21:44.418300  375556 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:21:44.582066  375556 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:21:44.829395  375556 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:21:44.830276  375556 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:21:44.833494  375556 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:21:44.835724  375556 out.go:204]   - Booting up control plane ...
	I0108 22:21:44.835871  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:21:44.835997  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:21:44.836115  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:21:44.858575  375556 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:21:44.859658  375556 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:21:44.859774  375556 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:21:45.004925  375556 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:21:43.648821  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:46.148491  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:48.152137  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:50.649779  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:54.508960  375556 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503706 seconds
	I0108 22:21:54.509100  375556 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:21:54.534526  375556 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:21:55.088263  375556 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:21:55.088497  375556 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-292054 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:21:55.625246  375556 kubeadm.go:322] [bootstrap-token] Using token: ca3oft.99pjh791kq903kea
	I0108 22:21:55.627406  375556 out.go:204]   - Configuring RBAC rules ...
	I0108 22:21:55.627535  375556 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:21:55.635469  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:21:55.658589  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:21:55.664394  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:21:55.670923  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:21:55.678315  375556 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:21:55.707544  375556 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:21:56.011289  375556 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:21:56.074068  375556 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:21:56.074122  375556 kubeadm.go:322] 
	I0108 22:21:56.074195  375556 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:21:56.074210  375556 kubeadm.go:322] 
	I0108 22:21:56.074305  375556 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:21:56.074315  375556 kubeadm.go:322] 
	I0108 22:21:56.074346  375556 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:21:56.074474  375556 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:21:56.074550  375556 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:21:56.074560  375556 kubeadm.go:322] 
	I0108 22:21:56.074635  375556 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:21:56.074649  375556 kubeadm.go:322] 
	I0108 22:21:56.074713  375556 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:21:56.074723  375556 kubeadm.go:322] 
	I0108 22:21:56.074810  375556 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:21:56.074933  375556 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:21:56.075027  375556 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:21:56.075037  375556 kubeadm.go:322] 
	I0108 22:21:56.075161  375556 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:21:56.075285  375556 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:21:56.075295  375556 kubeadm.go:322] 
	I0108 22:21:56.075430  375556 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ca3oft.99pjh791kq903kea \
	I0108 22:21:56.075574  375556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:21:56.075612  375556 kubeadm.go:322] 	--control-plane 
	I0108 22:21:56.075621  375556 kubeadm.go:322] 
	I0108 22:21:56.075733  375556 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:21:56.075744  375556 kubeadm.go:322] 
	I0108 22:21:56.075843  375556 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ca3oft.99pjh791kq903kea \
	I0108 22:21:56.075969  375556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:21:56.076235  375556 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:21:56.076281  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:21:56.076299  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:21:56.078385  375556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:21:56.079942  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:21:53.149618  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:55.649585  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:57.650103  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:56.112245  375556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
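	[editor's note] The 457-byte 1-k8s.conflist copied above is a standard bridge CNI configuration. The sketch below is only an assumed approximation of its shape (field values are illustrative), not the exact file minikube generates:
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF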
	I0108 22:21:56.183435  375556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:21:56.183568  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:56.183570  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=default-k8s-diff-port-292054 minikube.k8s.io/updated_at=2024_01_08T22_21_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:56.217296  375556 ops.go:34] apiserver oom_adj: -16
	I0108 22:21:56.721884  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:57.222982  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:57.722219  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:58.222712  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:58.722544  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:59.222082  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:59.722808  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.222562  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.722284  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.149913  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:02.650967  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:01.222401  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:01.722606  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:02.222313  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:02.722582  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:03.222793  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:03.722359  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:04.222245  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:04.722706  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.222841  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.722871  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.148941  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:07.149461  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:06.222648  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:06.722581  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:07.222288  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:07.722274  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.222744  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.722856  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.963467  375556 kubeadm.go:1088] duration metric: took 12.779973028s to wait for elevateKubeSystemPrivileges.
	I0108 22:22:08.963522  375556 kubeadm.go:406] StartCluster complete in 5m45.912753673s
	I0108 22:22:08.963553  375556 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:22:08.963665  375556 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:22:08.966435  375556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:22:08.966775  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:22:08.966928  375556 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:22:08.967034  375556 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967075  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:22:08.967095  375556 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.967104  375556 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:22:08.967152  375556 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967183  375556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-292054"
	I0108 22:22:08.967192  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.967271  375556 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967300  375556 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.967310  375556 addons.go:246] addon metrics-server should already be in state true
	I0108 22:22:08.967375  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.967667  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967695  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.967756  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967769  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967779  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.967796  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.986925  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0108 22:22:08.987023  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I0108 22:22:08.987549  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.987698  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.988282  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.988313  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.988483  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.988508  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.988606  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0108 22:22:08.989056  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.989111  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.989337  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:08.989834  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.989872  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.990158  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.990780  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.990796  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.991245  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.991880  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.991911  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.995239  375556 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.995265  375556 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:22:08.995290  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.995820  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.995865  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:09.011939  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0108 22:22:09.012468  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.013299  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.013318  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.013724  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.013935  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
	I0108 22:22:09.014168  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.014906  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.015481  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.015498  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.015842  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.016396  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:09.016424  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:09.016659  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.016741  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
	I0108 22:22:09.019481  375556 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:22:09.017701  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.021632  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:22:09.021669  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:22:09.021704  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.022354  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.022387  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.022852  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.023158  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.025362  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.027347  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.029567  375556 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:22:09.027877  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.028367  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.032055  375556 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:22:09.032070  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:22:09.032103  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.032160  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.032368  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.032489  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.032591  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.037266  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.037969  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.038003  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.038588  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.038650  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0108 22:22:09.038933  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.039112  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.039299  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.039313  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.039936  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.039974  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.040395  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.040652  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.042584  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.043735  375556 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:22:09.043754  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:22:09.043774  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.047511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.047647  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.047668  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.047828  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.048115  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.048267  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.048432  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.273503  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:22:09.286359  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:22:09.286398  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:22:09.395127  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:22:09.395521  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:22:09.399318  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:22:09.399351  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:22:09.529413  375556 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-292054" context rescaled to 1 replicas
	I0108 22:22:09.529456  375556 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:22:09.531970  375556 out.go:177] * Verifying Kubernetes components...
	I0108 22:22:09.533935  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:22:09.608669  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:22:09.608706  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:22:09.762095  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:22:11.642700  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369133486s)
	I0108 22:22:11.642752  375556 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0108 22:22:12.525251  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.130061811s)
	I0108 22:22:12.525333  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525335  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.129764757s)
	I0108 22:22:12.525352  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.525383  375556 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.99138928s)
	I0108 22:22:12.525439  375556 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-292054" to be "Ready" ...
	I0108 22:22:12.525390  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.525785  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.525799  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.525810  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525820  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.526200  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526208  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526224  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.526234  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.526244  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.526250  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526320  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526345  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.526627  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526640  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526644  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.600599  375556 node_ready.go:49] node "default-k8s-diff-port-292054" has status "Ready":"True"
	I0108 22:22:12.600630  375556 node_ready.go:38] duration metric: took 75.170013ms waiting for node "default-k8s-diff-port-292054" to be "Ready" ...
	I0108 22:22:12.600642  375556 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
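	[editor's note] The long run of pod_ready polls that follows is roughly what kubectl would do when waiting on those same label selectors. A hedged equivalent (context name taken from the log; label values assumed to match the deployed manifests), shown here for two of the selectors:
	kubectl --context default-k8s-diff-port-292054 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s
	kubectl --context default-k8s-diff-port-292054 -n kube-system wait pod \
	  -l component=kube-apiserver --for=condition=Ready --timeout=6m0s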
	I0108 22:22:12.607695  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.607735  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.608178  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.608205  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.698479  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.93630517s)
	I0108 22:22:12.698597  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.698624  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.699090  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.699114  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.699129  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.699141  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.699570  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.699611  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.699628  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.699642  375556 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-292054"
	I0108 22:22:12.702579  375556 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 22:22:09.152248  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:11.649021  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:12.704051  375556 addons.go:508] enable addons completed in 3.737129591s: enabled=[storage-provisioner default-storageclass metrics-server]
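	[editor's note] Outside the test harness, the three addons reported above could be toggled on this profile with the minikube CLI; a hedged manual equivalent of what the harness enabled here:
	minikube -p default-k8s-diff-port-292054 addons enable storage-provisioner
	minikube -p default-k8s-diff-port-292054 addons enable default-storageclass
	minikube -p default-k8s-diff-port-292054 addons enable metrics-server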
	I0108 22:22:12.730733  375556 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.740214  375556 pod_ready.go:92] pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.740241  375556 pod_ready.go:81] duration metric: took 1.009466865s waiting for pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.740252  375556 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.749855  375556 pod_ready.go:92] pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.749884  375556 pod_ready.go:81] duration metric: took 9.624914ms waiting for pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.749897  375556 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.774037  375556 pod_ready.go:92] pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.774082  375556 pod_ready.go:81] duration metric: took 24.173765ms waiting for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.774099  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.793737  375556 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.793763  375556 pod_ready.go:81] duration metric: took 19.654354ms waiting for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.793786  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.802646  375556 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.802675  375556 pod_ready.go:81] duration metric: took 8.880262ms waiting for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.802686  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bwmkb" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:14.935671  375556 pod_ready.go:92] pod "kube-proxy-bwmkb" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:14.935701  375556 pod_ready.go:81] duration metric: took 1.133008415s waiting for pod "kube-proxy-bwmkb" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:14.935712  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:15.337751  375556 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:15.337785  375556 pod_ready.go:81] duration metric: took 402.065003ms waiting for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:15.337799  375556 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.651032  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:16.150676  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:17.347997  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:19.848727  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:18.651581  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:21.153888  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:22.348002  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:24.348563  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:23.159095  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:25.648575  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:27.650462  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:26.847900  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:28.848176  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:30.148277  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:32.148917  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:31.353639  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:33.847750  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:34.649869  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:36.650396  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:36.349185  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:38.846642  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:40.851501  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:39.148741  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:41.150479  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:43.348737  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:45.848448  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:43.649911  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:46.149760  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:48.348731  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:50.849503  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:48.648402  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:50.649986  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:53.349307  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:55.349864  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:53.152397  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:55.651270  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:57.652287  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:57.854209  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:00.347211  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:59.655447  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:02.151802  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:02.351659  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:04.848930  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:04.650649  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:07.148845  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:06.864466  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:09.349319  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:09.150267  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:11.647897  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:11.350470  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:13.846976  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:13.648246  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:15.653072  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:16.348755  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:18.847624  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:20.850947  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:18.147230  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:20.148799  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:22.150181  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:22.854027  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:25.347172  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:24.648528  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:26.650104  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:27.350880  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:29.847065  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:28.651914  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:31.149983  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:31.849609  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:33.849918  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:35.852770  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:33.648054  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:35.650693  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:38.346376  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:40.347831  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:38.148131  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:40.149293  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:42.151041  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:42.845779  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:44.849417  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:44.655548  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:47.150423  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:46.850811  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:49.347304  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:49.652923  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:52.149820  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:51.348180  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:53.846474  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:55.847511  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:54.649820  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:57.149372  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:57.849233  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:00.348798  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:59.154056  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:01.649087  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:02.349247  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:04.350582  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:03.650176  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:06.153560  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:06.848567  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:09.349670  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:08.649461  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:11.149266  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:11.847194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:13.847282  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:15.849466  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:13.650152  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:15.653477  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:17.849683  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:20.348186  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:18.150536  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:20.650961  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:22.849232  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:25.349020  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:23.149893  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:25.151776  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:27.649498  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:27.848253  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:29.849644  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:29.651074  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:32.151463  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:32.348246  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:34.349140  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:34.650582  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:36.651676  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:36.848220  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:38.848664  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:40.848971  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:39.152183  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:41.648320  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:42.849338  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:45.347960  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:44.150739  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:46.649332  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:47.350030  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:49.847947  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:48.650293  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:50.650602  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:52.344857  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:54.347419  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:53.149776  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:55.150342  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:57.648269  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:56.347866  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:58.350081  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:00.848175  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:59.650591  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:02.149598  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:03.349797  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:05.849888  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:04.648771  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:06.651847  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:08.346160  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:10.348673  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:09.149033  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:11.149301  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:12.352279  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:14.846849  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:13.153318  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:15.651109  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:16.849657  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:19.347996  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:18.150751  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:20.650211  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:20.650242  375293 pod_ready.go:81] duration metric: took 4m0.010726332s waiting for pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace to be "Ready" ...
	E0108 22:25:20.650252  375293 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 22:25:20.650259  375293 pod_ready.go:38] duration metric: took 4m1.939720475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
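
Editor's note: the four-minute run above is minikube's pod_ready helper repeatedly reading the metrics-server pod and reporting its Ready condition until the wait deadline expires with "context deadline exceeded". For readers who want to reproduce that style of check outside minikube, a minimal client-go sketch follows; the kubeconfig path, namespace, and pod name are placeholders, and this is a simplified illustration, not minikube's pod_ready.go implementation.

    // podready_sketch.go - hedged illustration of polling a pod's Ready condition,
    // similar in spirit to the pod_ready.go loop in the log above. Not minikube code.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Bound the wait the same way the log does (4 minutes, then give up).
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()

    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-qhjlv", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    			fmt.Println(`pod has status "Ready":"False"`)
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("waitPodCondition: context deadline exceeded")
    			return
    		case <-time.After(2 * time.Second):
    		}
    	}
    }
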
	I0108 22:25:20.650300  375293 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:25:20.650336  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:20.650406  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:20.714451  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:20.714500  375293 cri.go:89] found id: ""
	I0108 22:25:20.714513  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:20.714621  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.720237  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:20.720367  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:20.767857  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:20.767904  375293 cri.go:89] found id: ""
	I0108 22:25:20.767916  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:20.767995  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.772859  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:20.772969  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:20.817193  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:20.817225  375293 cri.go:89] found id: ""
	I0108 22:25:20.817236  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:20.817310  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.824003  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:20.824113  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:20.884204  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:20.884252  375293 cri.go:89] found id: ""
	I0108 22:25:20.884263  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:20.884335  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.889658  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:20.889756  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:20.949423  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:20.949460  375293 cri.go:89] found id: ""
	I0108 22:25:20.949472  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:20.949543  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.954856  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:20.954944  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:21.011490  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:21.011538  375293 cri.go:89] found id: ""
	I0108 22:25:21.011551  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:21.011629  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:21.017544  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:21.017638  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:21.066267  375293 cri.go:89] found id: ""
	I0108 22:25:21.066310  375293 logs.go:284] 0 containers: []
	W0108 22:25:21.066322  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:21.066331  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:21.066404  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:21.123537  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:21.123571  375293 cri.go:89] found id: ""
	I0108 22:25:21.123583  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:21.123660  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:21.129269  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:21.129309  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:21.200266  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:21.200308  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:21.246669  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:21.246705  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:21.265861  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:21.265908  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:21.327968  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:21.328016  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:21.386940  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:21.386986  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:21.443896  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:21.443941  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:21.496699  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:21.496746  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:21.962773  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:21.962820  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:22.024288  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:22.024330  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:22.133928  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:22.133976  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:22.301006  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:22.301051  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
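
Editor's note: the block above is the container-discovery and log-gathering round minikube runs once the readiness wait gives up. Each control-plane component is located with `sudo crictl ps -a --quiet --name=<component>`, `which crictl` resolves the binary, and logs are then pulled per container with `crictl logs --tail 400 <id>` plus journalctl for crio and the kubelet. The sketch below mirrors those same shell commands from Go with os/exec; it is an illustrative reduction of the flow, not the logs.go implementation, and assumes crictl and passwordless sudo are available on the node.

    // loggather_sketch.go - hedged sketch of the discover-then-dump flow shown in the
    // log above: find a component's container ID with crictl, then tail its logs.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func run(cmd string) string {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Printf("command failed: %s: %v\n", cmd, err)
    	}
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"}

    	for _, name := range components {
    		// Same command minikube runs above to locate the container.
    		ids := strings.Fields(run("sudo crictl ps -a --quiet --name=" + name))
    		if len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		// Tail the last 400 lines, as in the "Gathering logs for ..." steps above.
    		fmt.Println(run("sudo /usr/bin/crictl logs --tail 400 " + ids[0]))
    	}

    	// Unit logs, gathered the same way as in the log above.
    	fmt.Println(run("sudo journalctl -u crio -n 400"))
    	fmt.Println(run("sudo journalctl -u kubelet -n 400"))
    }
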
	I0108 22:25:21.348655  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:23.350759  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:25.351301  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:24.847470  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:25:24.867718  375293 api_server.go:72] duration metric: took 4m8.80605206s to wait for apiserver process to appear ...
	I0108 22:25:24.867750  375293 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:25:24.867788  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:24.867842  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:24.918048  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:24.918090  375293 cri.go:89] found id: ""
	I0108 22:25:24.918104  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:24.918196  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:24.923984  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:24.924096  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:24.981033  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:24.981058  375293 cri.go:89] found id: ""
	I0108 22:25:24.981066  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:24.981116  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:24.985729  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:24.985802  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:25.038522  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:25.038558  375293 cri.go:89] found id: ""
	I0108 22:25:25.038570  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:25.038637  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.043106  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:25.043218  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:25.100189  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:25.100218  375293 cri.go:89] found id: ""
	I0108 22:25:25.100230  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:25.100298  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.107135  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:25.107252  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:25.155243  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:25.155276  375293 cri.go:89] found id: ""
	I0108 22:25:25.155288  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:25.155354  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.160457  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:25.160559  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:25.214754  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:25.214788  375293 cri.go:89] found id: ""
	I0108 22:25:25.214799  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:25.214855  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.219504  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:25.219595  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:25.267255  375293 cri.go:89] found id: ""
	I0108 22:25:25.267302  375293 logs.go:284] 0 containers: []
	W0108 22:25:25.267318  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:25.267329  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:25.267442  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:25.322636  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:25.322668  375293 cri.go:89] found id: ""
	I0108 22:25:25.322679  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:25.322750  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.327559  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:25.327592  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:25.396299  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:25.396354  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:25.447121  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:25.447188  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:25.501357  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:25.501413  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:25.572678  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:25.572741  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:25.624203  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:25.624248  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:26.021189  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:26.021250  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:26.122845  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:26.122893  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:26.297704  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:26.297746  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:26.361771  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:26.361826  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:26.422252  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:26.422292  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:26.479602  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:26.479641  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:27.848906  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:30.348452  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:28.997002  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:25:29.008040  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I0108 22:25:29.009729  375293 api_server.go:141] control plane version: v1.28.4
	I0108 22:25:29.009758  375293 api_server.go:131] duration metric: took 4.142001296s to wait for apiserver health ...
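
Editor's note: once pgrep confirms the kube-apiserver process (22:25:24 above), minikube switches to polling the apiserver's /healthz endpoint until it returns 200, then reads the control-plane version. Below is a minimal Go sketch of that health poll. The address is the one from this run; skipping TLS verification is an assumption made only to keep the sketch self-contained (minikube itself uses the cluster's certificates), and this is not the api_server.go code.

    // healthz_sketch.go - hedged sketch of polling an apiserver /healthz endpoint,
    // mirroring the "Checking apiserver healthz at ..." step above. Illustrative only.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// InsecureSkipVerify is an assumption for brevity; a real check should trust
    	// the cluster CA instead.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}

    	for i := 0; i < 30; i++ {
    		resp, err := client.Get("https://192.168.72.132:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned 200: %s\n", body) // typically "ok"
    				return
    			}
    			fmt.Printf("healthz returned %d\n", resp.StatusCode)
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver never became healthy")
    }
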
	I0108 22:25:29.009770  375293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:25:29.009807  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:29.009872  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:29.064244  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:29.064280  375293 cri.go:89] found id: ""
	I0108 22:25:29.064292  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:29.064357  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.069801  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:29.069900  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:29.115294  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:29.115328  375293 cri.go:89] found id: ""
	I0108 22:25:29.115338  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:29.115426  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.120512  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:29.120600  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:29.173571  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:29.173600  375293 cri.go:89] found id: ""
	I0108 22:25:29.173609  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:29.173670  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.179649  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:29.179724  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:29.230220  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:29.230272  375293 cri.go:89] found id: ""
	I0108 22:25:29.230286  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:29.230384  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.235437  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:29.235540  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:29.280861  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:29.280892  375293 cri.go:89] found id: ""
	I0108 22:25:29.280904  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:29.280974  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.286131  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:29.286247  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:29.337665  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:29.337700  375293 cri.go:89] found id: ""
	I0108 22:25:29.337711  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:29.337765  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.343912  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:29.344009  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:29.400428  375293 cri.go:89] found id: ""
	I0108 22:25:29.400458  375293 logs.go:284] 0 containers: []
	W0108 22:25:29.400466  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:29.400476  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:29.400532  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:29.458375  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:29.458416  375293 cri.go:89] found id: ""
	I0108 22:25:29.458428  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:29.458503  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.464513  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:29.464555  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:29.809503  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:29.809550  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:29.916786  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:29.916864  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:30.077876  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:30.077929  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:30.139380  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:30.139445  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:30.186829  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:30.186861  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:30.244185  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:30.244230  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:30.300429  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:30.300488  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:30.316880  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:30.316920  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:30.370537  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:30.370581  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:30.419043  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:30.419093  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:30.482758  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:30.482804  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:33.043083  375293 system_pods.go:59] 8 kube-system pods found
	I0108 22:25:33.043134  375293 system_pods.go:61] "coredns-5dd5756b68-jbz6n" [562faf84-b986-4f0e-97cd-41aa5ac7ea17] Running
	I0108 22:25:33.043139  375293 system_pods.go:61] "etcd-embed-certs-903819" [68146164-7115-4489-8010-32774433564a] Running
	I0108 22:25:33.043143  375293 system_pods.go:61] "kube-apiserver-embed-certs-903819" [367d0612-bd4d-448f-84f2-118afcb9d095] Running
	I0108 22:25:33.043148  375293 system_pods.go:61] "kube-controller-manager-embed-certs-903819" [43c3944a-3dfd-44ce-ba68-baebbced4406] Running
	I0108 22:25:33.043152  375293 system_pods.go:61] "kube-proxy-hqj9b" [14b3f3bd-1d65-4382-adc2-09344b54463d] Running
	I0108 22:25:33.043157  375293 system_pods.go:61] "kube-scheduler-embed-certs-903819" [9c004a9c-c77a-4ee5-970d-db41ddc26439] Running
	I0108 22:25:33.043167  375293 system_pods.go:61] "metrics-server-57f55c9bc5-qhjlv" [f1bff39b-c944-4de0-a5b8-eb239e91c6db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:25:33.043172  375293 system_pods.go:61] "storage-provisioner" [949c6275-6836-4035-89f5-f2d2c2caaa89] Running
	I0108 22:25:33.043180  375293 system_pods.go:74] duration metric: took 4.033402969s to wait for pod list to return data ...
	I0108 22:25:33.043189  375293 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:25:33.047488  375293 default_sa.go:45] found service account: "default"
	I0108 22:25:33.047526  375293 default_sa.go:55] duration metric: took 4.328925ms for default service account to be created ...
	I0108 22:25:33.047540  375293 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:25:33.055793  375293 system_pods.go:86] 8 kube-system pods found
	I0108 22:25:33.055824  375293 system_pods.go:89] "coredns-5dd5756b68-jbz6n" [562faf84-b986-4f0e-97cd-41aa5ac7ea17] Running
	I0108 22:25:33.055829  375293 system_pods.go:89] "etcd-embed-certs-903819" [68146164-7115-4489-8010-32774433564a] Running
	I0108 22:25:33.055834  375293 system_pods.go:89] "kube-apiserver-embed-certs-903819" [367d0612-bd4d-448f-84f2-118afcb9d095] Running
	I0108 22:25:33.055838  375293 system_pods.go:89] "kube-controller-manager-embed-certs-903819" [43c3944a-3dfd-44ce-ba68-baebbced4406] Running
	I0108 22:25:33.055841  375293 system_pods.go:89] "kube-proxy-hqj9b" [14b3f3bd-1d65-4382-adc2-09344b54463d] Running
	I0108 22:25:33.055845  375293 system_pods.go:89] "kube-scheduler-embed-certs-903819" [9c004a9c-c77a-4ee5-970d-db41ddc26439] Running
	I0108 22:25:33.055852  375293 system_pods.go:89] "metrics-server-57f55c9bc5-qhjlv" [f1bff39b-c944-4de0-a5b8-eb239e91c6db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:25:33.055859  375293 system_pods.go:89] "storage-provisioner" [949c6275-6836-4035-89f5-f2d2c2caaa89] Running
	I0108 22:25:33.055872  375293 system_pods.go:126] duration metric: took 8.323722ms to wait for k8s-apps to be running ...
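
Editor's note: the system_pods step above lists the kube-system pods twice, first to confirm the list returns data and then to verify the expected apps are running; metrics-server is still Pending here, yet the run continues to completion, so that pod is evidently not in the required set for this check. A hedged client-go sketch of the same listing follows (placeholder kubeconfig path; not the system_pods.go implementation).

    // systempods_sketch.go - hedged sketch of listing kube-system pods and reporting
    // their phase, in the spirit of the system_pods.go step above. Illustrative only.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	notRunning := 0
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    		if p.Status.Phase != corev1.PodRunning {
    			notRunning++
    		}
    	}
    	fmt.Printf("%d pod(s) not yet Running\n", notRunning)
    }
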
	I0108 22:25:33.055881  375293 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:25:33.055939  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:25:33.074598  375293 system_svc.go:56] duration metric: took 18.695286ms WaitForService to wait for kubelet.
	I0108 22:25:33.074637  375293 kubeadm.go:581] duration metric: took 4m17.012976103s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:25:33.074671  375293 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:25:33.079188  375293 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:25:33.079227  375293 node_conditions.go:123] node cpu capacity is 2
	I0108 22:25:33.079246  375293 node_conditions.go:105] duration metric: took 4.559946ms to run NodePressure ...
	I0108 22:25:33.079261  375293 start.go:228] waiting for startup goroutines ...
	I0108 22:25:33.079270  375293 start.go:233] waiting for cluster config update ...
	I0108 22:25:33.079283  375293 start.go:242] writing updated cluster config ...
	I0108 22:25:33.079792  375293 ssh_runner.go:195] Run: rm -f paused
	I0108 22:25:33.144148  375293 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:25:33.146897  375293 out.go:177] * Done! kubectl is now configured to use "embed-certs-903819" cluster and "default" namespace by default
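
Editor's note: the lines above record that the embed-certs-903819 profile finished successfully. kubectl 1.29.0 against a 1.28.4 control plane is a one-minor-version skew, which kubectl's version skew policy permits, so minikube only notes it. The server-side version that line compares against can be read through client-go's discovery client, as in this small hedged sketch (placeholder kubeconfig path; not minikube's start.go code).

    // version_sketch.go - hedged sketch of reading the control-plane version behind the
    // "kubectl: 1.29.0, cluster: 1.28.4" comparison above. Illustrative only.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// The discovery client returns the apiserver's build information (e.g. v1.28.4).
    	info, err := client.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("control plane version: %s (major %s, minor %s)\n", info.GitVersion, info.Major, info.Minor)
    }
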
	I0108 22:25:32.349693  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:34.845955  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:36.851909  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:39.348575  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:41.350957  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:43.848565  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:46.348360  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:48.847346  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:51.346764  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:53.849331  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:56.349683  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:58.350457  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:00.847803  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:03.352522  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:05.844769  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:07.846346  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:09.848453  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:11.850250  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:14.347576  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:15.349616  375556 pod_ready.go:81] duration metric: took 4m0.011802861s waiting for pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace to be "Ready" ...
	E0108 22:26:15.349643  375556 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 22:26:15.349651  375556 pod_ready.go:38] duration metric: took 4m2.748998751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:26:15.349666  375556 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:26:15.349720  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:15.349773  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:15.414233  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:15.414273  375556 cri.go:89] found id: ""
	I0108 22:26:15.414286  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:15.414367  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.421348  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:15.421439  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:15.480484  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:15.480508  375556 cri.go:89] found id: ""
	I0108 22:26:15.480517  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:15.480569  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.486049  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:15.486125  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:15.551549  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:15.551588  375556 cri.go:89] found id: ""
	I0108 22:26:15.551600  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:15.551665  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.556950  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:15.557035  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:15.607375  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:15.607417  375556 cri.go:89] found id: ""
	I0108 22:26:15.607433  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:15.607530  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.613182  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:15.613253  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:15.663780  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:15.663805  375556 cri.go:89] found id: ""
	I0108 22:26:15.663813  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:15.663882  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.668629  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:15.668748  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:15.722341  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:15.722370  375556 cri.go:89] found id: ""
	I0108 22:26:15.722380  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:15.722453  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.727974  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:15.728089  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:15.782298  375556 cri.go:89] found id: ""
	I0108 22:26:15.782331  375556 logs.go:284] 0 containers: []
	W0108 22:26:15.782349  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:15.782358  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:15.782436  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:15.836150  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:15.836194  375556 cri.go:89] found id: ""
	I0108 22:26:15.836207  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:15.836307  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.842152  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:15.842184  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:15.900314  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:15.900378  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:15.974860  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:15.974903  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:16.021465  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:16.021529  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:16.477647  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:16.477706  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:16.588562  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:16.588615  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:16.604310  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:16.604383  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:16.770738  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:16.770778  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:16.835271  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:16.835320  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:16.899297  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:16.899354  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:16.957508  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:16.957549  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:17.001214  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:17.001255  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:19.561271  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:26:19.578731  375556 api_server.go:72] duration metric: took 4m10.049236985s to wait for apiserver process to appear ...
	I0108 22:26:19.578768  375556 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:26:19.578821  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:19.578897  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:19.630380  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:19.630410  375556 cri.go:89] found id: ""
	I0108 22:26:19.630422  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:19.630496  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.635902  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:19.635998  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:19.682023  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:19.682057  375556 cri.go:89] found id: ""
	I0108 22:26:19.682072  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:19.682143  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.688443  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:19.688567  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:19.738612  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:19.738651  375556 cri.go:89] found id: ""
	I0108 22:26:19.738664  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:19.738790  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.745590  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:19.745726  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:19.796647  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:19.796674  375556 cri.go:89] found id: ""
	I0108 22:26:19.796685  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:19.796747  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.801789  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:19.801872  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:19.846026  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:19.846060  375556 cri.go:89] found id: ""
	I0108 22:26:19.846070  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:19.846150  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.851227  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:19.851299  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:19.906135  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:19.906173  375556 cri.go:89] found id: ""
	I0108 22:26:19.906184  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:19.906267  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.911914  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:19.912048  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:19.960064  375556 cri.go:89] found id: ""
	I0108 22:26:19.960104  375556 logs.go:284] 0 containers: []
	W0108 22:26:19.960117  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:19.960126  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:19.960198  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:20.010136  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:20.010171  375556 cri.go:89] found id: ""
	I0108 22:26:20.010181  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:20.010256  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:20.015368  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:20.015402  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:20.122508  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:20.122575  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:20.272565  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:20.272610  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:20.335281  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:20.335334  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:20.384028  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:20.384088  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:20.779192  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:20.779250  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:20.795137  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:20.795170  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:20.863312  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:20.863395  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:20.918084  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:20.918132  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:20.966066  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:20.966108  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:21.030610  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:21.030704  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:21.083525  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:21.083567  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:23.662287  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:26:23.671857  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 200:
	ok
	I0108 22:26:23.673883  375556 api_server.go:141] control plane version: v1.28.4
	I0108 22:26:23.673919  375556 api_server.go:131] duration metric: took 4.095141482s to wait for apiserver health ...
	I0108 22:26:23.673932  375556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:26:23.673967  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:23.674045  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:23.733069  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:23.733098  375556 cri.go:89] found id: ""
	I0108 22:26:23.733109  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:23.733168  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.739866  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:23.739960  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:23.807666  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:23.807693  375556 cri.go:89] found id: ""
	I0108 22:26:23.807704  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:23.807765  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.813449  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:23.813543  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:23.876403  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:23.876431  375556 cri.go:89] found id: ""
	I0108 22:26:23.876442  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:23.876511  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.885128  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:23.885232  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:23.953100  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:23.953129  375556 cri.go:89] found id: ""
	I0108 22:26:23.953139  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:23.953211  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.960146  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:23.960246  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:24.022581  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:24.022608  375556 cri.go:89] found id: ""
	I0108 22:26:24.022616  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:24.022669  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:24.029307  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:24.029399  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:24.088026  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:24.088063  375556 cri.go:89] found id: ""
	I0108 22:26:24.088074  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:24.088151  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:24.094051  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:24.094175  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:24.156867  375556 cri.go:89] found id: ""
	I0108 22:26:24.156902  375556 logs.go:284] 0 containers: []
	W0108 22:26:24.156914  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:24.156924  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:24.157020  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:24.219558  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:24.219581  375556 cri.go:89] found id: ""
	I0108 22:26:24.219589  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:24.219641  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:24.224823  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:24.224866  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:24.321726  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:24.321777  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:24.749669  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:24.749737  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:24.821645  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:24.821690  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:24.883279  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:24.883325  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:24.942199  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:24.942253  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:25.003721  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:25.003766  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:25.051208  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:25.051241  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:25.102533  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:25.102580  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:25.158556  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:25.158610  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:25.263571  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:25.263618  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:25.281380  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:25.281414  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:27.948731  375556 system_pods.go:59] 8 kube-system pods found
	I0108 22:26:27.948767  375556 system_pods.go:61] "coredns-5dd5756b68-r27zw" [c82dae88-118a-4e13-a714-1240d48dfc4e] Running
	I0108 22:26:27.948774  375556 system_pods.go:61] "etcd-default-k8s-diff-port-292054" [d8145b74-cc40-40eb-b9e2-5a19e096e5f7] Running
	I0108 22:26:27.948782  375556 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-292054" [5bb945e6-e633-4fdc-bbec-16c72cb3ca88] Running
	I0108 22:26:27.948787  375556 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-292054" [8d376b79-f3ab-4f74-a927-e3f1775853c0] Running
	I0108 22:26:27.948794  375556 system_pods.go:61] "kube-proxy-bwmkb" [c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2] Running
	I0108 22:26:27.948800  375556 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-292054" [d125cdbe-49e2-48af-bcf8-44d514cd4a1c] Running
	I0108 22:26:27.948811  375556 system_pods.go:61] "metrics-server-57f55c9bc5-jm9lg" [b94afab5-f573-4ed1-bc29-64eb8e90c574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:26:27.948827  375556 system_pods.go:61] "storage-provisioner" [05c2430d-d84e-415e-83b3-c32e7635fe74] Running
	I0108 22:26:27.948839  375556 system_pods.go:74] duration metric: took 4.274897836s to wait for pod list to return data ...
	I0108 22:26:27.948852  375556 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:26:27.952207  375556 default_sa.go:45] found service account: "default"
	I0108 22:26:27.952241  375556 default_sa.go:55] duration metric: took 3.378283ms for default service account to be created ...
	I0108 22:26:27.952252  375556 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:26:27.958708  375556 system_pods.go:86] 8 kube-system pods found
	I0108 22:26:27.958744  375556 system_pods.go:89] "coredns-5dd5756b68-r27zw" [c82dae88-118a-4e13-a714-1240d48dfc4e] Running
	I0108 22:26:27.958752  375556 system_pods.go:89] "etcd-default-k8s-diff-port-292054" [d8145b74-cc40-40eb-b9e2-5a19e096e5f7] Running
	I0108 22:26:27.958757  375556 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-292054" [5bb945e6-e633-4fdc-bbec-16c72cb3ca88] Running
	I0108 22:26:27.958763  375556 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-292054" [8d376b79-f3ab-4f74-a927-e3f1775853c0] Running
	I0108 22:26:27.958767  375556 system_pods.go:89] "kube-proxy-bwmkb" [c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2] Running
	I0108 22:26:27.958772  375556 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-292054" [d125cdbe-49e2-48af-bcf8-44d514cd4a1c] Running
	I0108 22:26:27.958849  375556 system_pods.go:89] "metrics-server-57f55c9bc5-jm9lg" [b94afab5-f573-4ed1-bc29-64eb8e90c574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:26:27.958860  375556 system_pods.go:89] "storage-provisioner" [05c2430d-d84e-415e-83b3-c32e7635fe74] Running
	I0108 22:26:27.958870  375556 system_pods.go:126] duration metric: took 6.613305ms to wait for k8s-apps to be running ...
	I0108 22:26:27.958892  375556 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:26:27.958967  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:26:27.979435  375556 system_svc.go:56] duration metric: took 20.53748ms WaitForService to wait for kubelet.
	I0108 22:26:27.979474  375556 kubeadm.go:581] duration metric: took 4m18.449992338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:26:27.979500  375556 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:26:27.983117  375556 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:26:27.983146  375556 node_conditions.go:123] node cpu capacity is 2
	I0108 22:26:27.983159  375556 node_conditions.go:105] duration metric: took 3.652979ms to run NodePressure ...
	I0108 22:26:27.983171  375556 start.go:228] waiting for startup goroutines ...
	I0108 22:26:27.983177  375556 start.go:233] waiting for cluster config update ...
	I0108 22:26:27.983187  375556 start.go:242] writing updated cluster config ...
	I0108 22:26:27.983521  375556 ssh_runner.go:195] Run: rm -f paused
	I0108 22:26:28.042279  375556 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:26:28.044728  375556 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-292054" cluster and "default" namespace by default
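
The run above ends with the apiserver health probe logged at 22:26:23 ("Checking apiserver healthz at https://192.168.50.18:8444/healthz", answered with 200 "ok"). The following is a minimal, hypothetical Go sketch of that kind of probe, included only to make the logged step concrete. The address and port are taken from the log; the InsecureSkipVerify transport is an assumption made so the sketch is self-contained and does not reproduce minikube's actual client configuration.

// Illustrative sketch of an apiserver /healthz probe like the one logged above.
// The endpoint is from the log; skipping TLS verification is an assumption.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Skip certificate verification purely for illustration;
			// a real client would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.50.18:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Println(resp.StatusCode, string(body))
}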
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:15:20 UTC, ends at Mon 2024-01-08 22:30:10 UTC. --
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.051269707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753010051254091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=9f5f755c-f7c3-488a-b9bf-a9e116d826b4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.051943893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b93863d-51ff-46f1-b7b7-a2c703b944d8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.052023877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b93863d-51ff-46f1-b7b7-a2c703b944d8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.052305180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51,PodSandboxId:adfcbf086da7fe05126d54bbd9c86f5e67d9050368718238923143bf1d3cae52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704752465652665361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c64608-a169-455b-a5e9-0ecb4161432c,},Annotations:map[string]string{io.kubernetes.container.hash: c585c291,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1,PodSandboxId:054b819514e40e7785a19c7a4dba0d91d6bdd3b1aa3154137140b25ce1dd1042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704752465514889849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6106f11-9345-4915-b7cc-d2671a7c4e72,},Annotations:map[string]string{io.kubernetes.container.hash: 7c626287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1,PodSandboxId:1574fec38aef2e5ca6ff2371d81e3c69d3a294a0934a94ebfed4a376a6dd83af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704752464744612391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q6x86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cad2e0f-a7af-453d-9eaf-55b56e41e27b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b0cfff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a,PodSandboxId:437bddedb8cde408c0068598a4eef15afe0e83ba48b6ee656768499b2258792d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704752439635913133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da8d62a73b9aa74f281d065810637e52,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519,PodSandboxId:6d1529b8b59b4b346e4343b33a212981294716a8f573c87a7c822592324112ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704752439757202259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b429afd35fb35e7175a2229d9c3b42,},Annotations:map
[string]string{io.kubernetes.container.hash: c4e55e0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368,PodSandboxId:7d811d9bcf646aa74fe53a5fd83fbd7c25a658c5256f16f79b06a4cb5f2edb3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704752439731337174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebce1fdd2b617f12922a15f46
381764a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e,PodSandboxId:5224f1f876a4823df9640254832758936cdfa4be3a2ee49a38e7d31aeff2d237,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704752439403274680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54e50fcfb3be71e057db2a64f2cb179,},A
nnotations:map[string]string{io.kubernetes.container.hash: a74adc68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b93863d-51ff-46f1-b7b7-a2c703b944d8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.108853664Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7c6b0199-0edb-4533-acaf-fca44c396184 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.108922684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7c6b0199-0edb-4533-acaf-fca44c396184 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.111471966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=259b0651-cfb5-4395-9ea8-72bf7ee9ac6e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.111939300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753010111924261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=259b0651-cfb5-4395-9ea8-72bf7ee9ac6e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.113172782Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=16ec1dbe-7618-4e75-adfa-ad5545fbb0a9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.113317238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=16ec1dbe-7618-4e75-adfa-ad5545fbb0a9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.113477001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51,PodSandboxId:adfcbf086da7fe05126d54bbd9c86f5e67d9050368718238923143bf1d3cae52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704752465652665361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c64608-a169-455b-a5e9-0ecb4161432c,},Annotations:map[string]string{io.kubernetes.container.hash: c585c291,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1,PodSandboxId:054b819514e40e7785a19c7a4dba0d91d6bdd3b1aa3154137140b25ce1dd1042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704752465514889849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6106f11-9345-4915-b7cc-d2671a7c4e72,},Annotations:map[string]string{io.kubernetes.container.hash: 7c626287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1,PodSandboxId:1574fec38aef2e5ca6ff2371d81e3c69d3a294a0934a94ebfed4a376a6dd83af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704752464744612391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q6x86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cad2e0f-a7af-453d-9eaf-55b56e41e27b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b0cfff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a,PodSandboxId:437bddedb8cde408c0068598a4eef15afe0e83ba48b6ee656768499b2258792d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704752439635913133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da8d62a73b9aa74f281d065810637e52,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519,PodSandboxId:6d1529b8b59b4b346e4343b33a212981294716a8f573c87a7c822592324112ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704752439757202259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b429afd35fb35e7175a2229d9c3b42,},Annotations:map
[string]string{io.kubernetes.container.hash: c4e55e0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368,PodSandboxId:7d811d9bcf646aa74fe53a5fd83fbd7c25a658c5256f16f79b06a4cb5f2edb3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704752439731337174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebce1fdd2b617f12922a15f46
381764a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e,PodSandboxId:5224f1f876a4823df9640254832758936cdfa4be3a2ee49a38e7d31aeff2d237,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704752439403274680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54e50fcfb3be71e057db2a64f2cb179,},A
nnotations:map[string]string{io.kubernetes.container.hash: a74adc68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=16ec1dbe-7618-4e75-adfa-ad5545fbb0a9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.164481785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=053a5277-993b-424a-b4c9-d35f25147e06 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.164550799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=053a5277-993b-424a-b4c9-d35f25147e06 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.166078981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bb62090e-35a2-4fb3-81bd-094578bb6647 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.166621706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753010166597728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=bb62090e-35a2-4fb3-81bd-094578bb6647 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.167591904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=704e49db-6799-4f79-8a98-ca781a74a1b6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.167684350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=704e49db-6799-4f79-8a98-ca781a74a1b6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.168022060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51,PodSandboxId:adfcbf086da7fe05126d54bbd9c86f5e67d9050368718238923143bf1d3cae52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704752465652665361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c64608-a169-455b-a5e9-0ecb4161432c,},Annotations:map[string]string{io.kubernetes.container.hash: c585c291,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1,PodSandboxId:054b819514e40e7785a19c7a4dba0d91d6bdd3b1aa3154137140b25ce1dd1042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704752465514889849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6106f11-9345-4915-b7cc-d2671a7c4e72,},Annotations:map[string]string{io.kubernetes.container.hash: 7c626287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1,PodSandboxId:1574fec38aef2e5ca6ff2371d81e3c69d3a294a0934a94ebfed4a376a6dd83af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704752464744612391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q6x86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cad2e0f-a7af-453d-9eaf-55b56e41e27b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b0cfff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a,PodSandboxId:437bddedb8cde408c0068598a4eef15afe0e83ba48b6ee656768499b2258792d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704752439635913133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da8d62a73b9aa74f281d065810637e52,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519,PodSandboxId:6d1529b8b59b4b346e4343b33a212981294716a8f573c87a7c822592324112ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704752439757202259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b429afd35fb35e7175a2229d9c3b42,},Annotations:map
[string]string{io.kubernetes.container.hash: c4e55e0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368,PodSandboxId:7d811d9bcf646aa74fe53a5fd83fbd7c25a658c5256f16f79b06a4cb5f2edb3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704752439731337174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebce1fdd2b617f12922a15f46
381764a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e,PodSandboxId:5224f1f876a4823df9640254832758936cdfa4be3a2ee49a38e7d31aeff2d237,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704752439403274680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54e50fcfb3be71e057db2a64f2cb179,},A
nnotations:map[string]string{io.kubernetes.container.hash: a74adc68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=704e49db-6799-4f79-8a98-ca781a74a1b6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.213874222Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=24844978-2234-4368-8ac9-00f7fcd1580a name=/runtime.v1.RuntimeService/Version
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.213949713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=24844978-2234-4368-8ac9-00f7fcd1580a name=/runtime.v1.RuntimeService/Version
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.215456379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=95cc343a-d6a7-4bb3-a478-a475629629c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.215913509Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753010215893736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=95cc343a-d6a7-4bb3-a478-a475629629c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.216888153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=92a2ebb0-5d4f-4e2a-950f-f1fa4a56c8ac name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.216942835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=92a2ebb0-5d4f-4e2a-950f-f1fa4a56c8ac name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:30:10 no-preload-675668 crio[728]: time="2024-01-08 22:30:10.217179405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51,PodSandboxId:adfcbf086da7fe05126d54bbd9c86f5e67d9050368718238923143bf1d3cae52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704752465652665361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c64608-a169-455b-a5e9-0ecb4161432c,},Annotations:map[string]string{io.kubernetes.container.hash: c585c291,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1,PodSandboxId:054b819514e40e7785a19c7a4dba0d91d6bdd3b1aa3154137140b25ce1dd1042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704752465514889849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6106f11-9345-4915-b7cc-d2671a7c4e72,},Annotations:map[string]string{io.kubernetes.container.hash: 7c626287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1,PodSandboxId:1574fec38aef2e5ca6ff2371d81e3c69d3a294a0934a94ebfed4a376a6dd83af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704752464744612391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q6x86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cad2e0f-a7af-453d-9eaf-55b56e41e27b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b0cfff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a,PodSandboxId:437bddedb8cde408c0068598a4eef15afe0e83ba48b6ee656768499b2258792d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704752439635913133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da8d62a73b9aa74f281d065810637e52,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519,PodSandboxId:6d1529b8b59b4b346e4343b33a212981294716a8f573c87a7c822592324112ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704752439757202259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b429afd35fb35e7175a2229d9c3b42,},Annotations:map
[string]string{io.kubernetes.container.hash: c4e55e0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368,PodSandboxId:7d811d9bcf646aa74fe53a5fd83fbd7c25a658c5256f16f79b06a4cb5f2edb3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704752439731337174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebce1fdd2b617f12922a15f46
381764a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e,PodSandboxId:5224f1f876a4823df9640254832758936cdfa4be3a2ee49a38e7d31aeff2d237,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704752439403274680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54e50fcfb3be71e057db2a64f2cb179,},A
nnotations:map[string]string{io.kubernetes.container.hash: a74adc68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=92a2ebb0-5d4f-4e2a-950f-f1fa4a56c8ac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e15e1c41230c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   adfcbf086da7f       storage-provisioner
	93c09e966efd8       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   054b819514e40       kube-proxy-b2nx2
	e5f90e1ab3c3f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1574fec38aef2       coredns-76f75df574-q6x86
	b18c1aa940c39       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   6d1529b8b59b4       etcd-no-preload-675668
	9d104fdafcd88       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   7d811d9bcf646       kube-controller-manager-no-preload-675668
	6082f16eb29f6       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   437bddedb8cde       kube-scheduler-no-preload-675668
	d24f3f60a2148       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   5224f1f876a48       kube-apiserver-no-preload-675668
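
The "container status" table above corresponds to the listing command recorded earlier in the run (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a). A minimal Go sketch of that crictl-with-docker-fallback pattern follows; it assumes crictl (or docker) is installed and runnable via sudo, and it is illustrative rather than minikube's actual collection code.

// Illustrative sketch: list all containers via crictl, falling back to docker,
// mirroring the shell fallback recorded in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		// Fall back to docker, as the "|| sudo docker ps -a" in the log does.
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("unable to list containers:", err)
		return
	}
	fmt.Print(string(out))
}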
	
	
	==> coredns [e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               no-preload-675668
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-675668
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=no-preload-675668
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_20_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:20:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-675668
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:30:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:26:16 +0000   Mon, 08 Jan 2024 22:20:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:26:16 +0000   Mon, 08 Jan 2024 22:20:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:26:16 +0000   Mon, 08 Jan 2024 22:20:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:26:16 +0000   Mon, 08 Jan 2024 22:20:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.153
	  Hostname:    no-preload-675668
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5be65de79214ccfa8a782e6d782b105
	  System UUID:                a5be65de-7921-4ccf-a8a7-82e6d782b105
	  Boot ID:                    cb17c24e-144a-4314-9c42-d7cf36b13e5e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-q6x86                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-675668                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-no-preload-675668             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-no-preload-675668    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-b2nx2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-no-preload-675668             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 metrics-server-57f55c9bc5-vb2kj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m32s (x8 over 9m32s)  kubelet          Node no-preload-675668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m32s (x8 over 9m32s)  kubelet          Node no-preload-675668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m32s (x7 over 9m32s)  kubelet          Node no-preload-675668 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node no-preload-675668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node no-preload-675668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node no-preload-675668 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m9s                   node-controller  Node no-preload-675668 event: Registered Node no-preload-675668 in Controller
	
	
	==> dmesg <==
	[Jan 8 22:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072089] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.557742] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.883403] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147624] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.606727] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.208071] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.117109] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.166835] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.112168] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[  +0.250004] systemd-fstab-generator[714]: Ignoring "noauto" for root device
	[ +30.790938] systemd-fstab-generator[1342]: Ignoring "noauto" for root device
	[Jan 8 22:16] kauditd_printk_skb: 29 callbacks suppressed
	[Jan 8 22:20] systemd-fstab-generator[3911]: Ignoring "noauto" for root device
	[ +10.383523] systemd-fstab-generator[4243]: Ignoring "noauto" for root device
	[Jan 8 22:21] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519] <==
	{"level":"info","ts":"2024-01-08T22:20:42.095638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 switched to configuration voters=(9861232620691677522)"}
	{"level":"info","ts":"2024-01-08T22:20:42.097806Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7dd884e79d7a6c6","local-member-id":"88da22d24bd26152","added-peer-id":"88da22d24bd26152","added-peer-peer-urls":["https://192.168.61.153:2380"]}
	{"level":"info","ts":"2024-01-08T22:20:42.152058Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T22:20:42.152418Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"88da22d24bd26152","initial-advertise-peer-urls":["https://192.168.61.153:2380"],"listen-peer-urls":["https://192.168.61.153:2380"],"advertise-client-urls":["https://192.168.61.153:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.153:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T22:20:42.15247Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T22:20:42.152655Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.153:2380"}
	{"level":"info","ts":"2024-01-08T22:20:42.152671Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.153:2380"}
	{"level":"info","ts":"2024-01-08T22:20:42.523326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T22:20:42.523446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T22:20:42.523473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 received MsgPreVoteResp from 88da22d24bd26152 at term 1"}
	{"level":"info","ts":"2024-01-08T22:20:42.523488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:42.523494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 received MsgVoteResp from 88da22d24bd26152 at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:42.523504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:42.523513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 88da22d24bd26152 elected leader 88da22d24bd26152 at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:42.525482Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"88da22d24bd26152","local-member-attributes":"{Name:no-preload-675668 ClientURLs:[https://192.168.61.153:2379]}","request-path":"/0/members/88da22d24bd26152/attributes","cluster-id":"7dd884e79d7a6c6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T22:20:42.525797Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:20:42.526026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:20:42.526189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T22:20:42.526223Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T22:20:42.526318Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:42.52797Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7dd884e79d7a6c6","local-member-id":"88da22d24bd26152","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:42.52811Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:42.528172Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:42.529983Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T22:20:42.535487Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.153:2379"}
	
	
	==> kernel <==
	 22:30:10 up 14 min,  0 users,  load average: 0.02, 0.15, 0.16
	Linux no-preload-675668 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e] <==
	I0108 22:24:04.940657       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:25:44.324263       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:25:44.324451       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0108 22:25:45.325268       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:25:45.325391       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:25:45.325409       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:25:45.326820       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:25:45.327258       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:25:45.327418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:26:45.326657       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:26:45.327111       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:26:45.327177       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:26:45.328085       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:26:45.328187       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:26:45.328198       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:28:45.327636       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:28:45.328113       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:28:45.328153       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:28:45.328379       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:28:45.328476       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:28:45.329888       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368] <==
	I0108 22:24:32.399710       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:25:01.932306       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:25:02.411289       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:25:31.942509       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:25:32.422544       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:26:01.950403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:26:02.433167       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:26:31.957336       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:26:32.447114       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:27:01.964198       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:27:02.456609       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 22:27:05.510592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="444.334µs"
	I0108 22:27:18.511928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="118.185µs"
	E0108 22:27:31.971537       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:27:32.472429       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:28:01.978170       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:28:02.484061       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:28:31.983690       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:28:32.497582       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:29:01.990789       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:29:02.509903       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:29:31.997059       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:29:32.523405       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:30:02.003186       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:30:02.542975       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1] <==
	I0108 22:21:06.010127       1 server_others.go:72] "Using iptables proxy"
	I0108 22:21:06.038841       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.153"]
	I0108 22:21:06.122281       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0108 22:21:06.122362       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:21:06.122379       1 server_others.go:168] "Using iptables Proxier"
	I0108 22:21:06.127422       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:21:06.127928       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0108 22:21:06.127974       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:21:06.129078       1 config.go:188] "Starting service config controller"
	I0108 22:21:06.129132       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:21:06.129152       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:21:06.129156       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:21:06.129918       1 config.go:315] "Starting node config controller"
	I0108 22:21:06.129966       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:21:06.229688       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 22:21:06.229844       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:21:06.230121       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a] <==
	W0108 22:20:45.425939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:20:45.426066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 22:20:45.437032       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:20:45.437135       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 22:20:45.455370       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:20:45.455469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 22:20:45.458639       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:20:45.458812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 22:20:45.490033       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:20:45.490092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 22:20:45.557178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:20:45.557341       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:20:45.582793       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 22:20:45.583035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 22:20:45.688888       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:20:45.689132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:20:45.725279       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:20:45.725509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:20:45.784380       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:20:45.784793       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:20:45.833256       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:20:45.833296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:20:45.875952       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:20:45.876028       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0108 22:20:47.847503       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:15:20 UTC, ends at Mon 2024-01-08 22:30:10 UTC. --
	Jan 08 22:27:33 no-preload-675668 kubelet[4249]: E0108 22:27:33.488464    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:27:47 no-preload-675668 kubelet[4249]: E0108 22:27:47.488075    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:27:48 no-preload-675668 kubelet[4249]: E0108 22:27:48.538907    4249 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:27:48 no-preload-675668 kubelet[4249]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:27:48 no-preload-675668 kubelet[4249]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:27:48 no-preload-675668 kubelet[4249]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:27:59 no-preload-675668 kubelet[4249]: E0108 22:27:59.488159    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:28:11 no-preload-675668 kubelet[4249]: E0108 22:28:11.488234    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:28:26 no-preload-675668 kubelet[4249]: E0108 22:28:26.489881    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:28:37 no-preload-675668 kubelet[4249]: E0108 22:28:37.488267    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:28:48 no-preload-675668 kubelet[4249]: E0108 22:28:48.489670    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:28:48 no-preload-675668 kubelet[4249]: E0108 22:28:48.539519    4249 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:28:48 no-preload-675668 kubelet[4249]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:28:48 no-preload-675668 kubelet[4249]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:28:48 no-preload-675668 kubelet[4249]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:29:00 no-preload-675668 kubelet[4249]: E0108 22:29:00.488679    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:29:13 no-preload-675668 kubelet[4249]: E0108 22:29:13.488562    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:29:26 no-preload-675668 kubelet[4249]: E0108 22:29:26.487933    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:29:39 no-preload-675668 kubelet[4249]: E0108 22:29:39.488020    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:29:48 no-preload-675668 kubelet[4249]: E0108 22:29:48.534597    4249 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:29:48 no-preload-675668 kubelet[4249]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:29:48 no-preload-675668 kubelet[4249]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:29:48 no-preload-675668 kubelet[4249]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:29:54 no-preload-675668 kubelet[4249]: E0108 22:29:54.488924    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:30:09 no-preload-675668 kubelet[4249]: E0108 22:30:09.490979    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	
	
	==> storage-provisioner [7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51] <==
	I0108 22:21:05.995495       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:21:06.011409       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:21:06.011555       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:21:06.032216       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:21:06.034595       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-675668_30d73bf1-3d01-4127-a18b-ef42b5387705!
	I0108 22:21:06.040335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6141c73c-6936-478d-9a5e-025b74c98f00", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-675668_30d73bf1-3d01-4127-a18b-ef42b5387705 became leader
	I0108 22:21:06.135808       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-675668_30d73bf1-3d01-4127-a18b-ef42b5387705!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-675668 -n no-preload-675668
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-675668 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vb2kj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-675668 describe pod metrics-server-57f55c9bc5-vb2kj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-675668 describe pod metrics-server-57f55c9bc5-vb2kj: exit status 1 (79.170344ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vb2kj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-675668 describe pod metrics-server-57f55c9bc5-vb2kj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 22:26:08.015233  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-903819 -n embed-certs-903819
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-08 22:34:33.846903829 +0000 UTC m=+5549.548026179
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903819 -n embed-certs-903819
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-903819 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-903819 logs -n 25: (1.993809216s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-523607                              | cert-expiration-523607       | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343954 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | disable-driver-mounts-343954                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:09 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079759        | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC | 08 Jan 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-675668             | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-903819            | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-292054  | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC | 08 Jan 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC |                     |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079759             | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC | 08 Jan 24 22:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-675668                  | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-903819                 | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-292054       | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:26 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:11:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:11:46.087099  375556 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:11:46.087257  375556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:46.087268  375556 out.go:309] Setting ErrFile to fd 2...
	I0108 22:11:46.087273  375556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:46.087523  375556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:11:46.088153  375556 out.go:303] Setting JSON to false
	I0108 22:11:46.089299  375556 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10432,"bootTime":1704741474,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:11:46.089374  375556 start.go:138] virtualization: kvm guest
	I0108 22:11:46.092180  375556 out.go:177] * [default-k8s-diff-port-292054] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:11:46.093649  375556 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:11:46.093727  375556 notify.go:220] Checking for updates...
	I0108 22:11:46.095251  375556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:11:46.097142  375556 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:11:46.099048  375556 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:11:46.100864  375556 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:11:46.102762  375556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:11:46.105085  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:11:46.105575  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:11:46.105654  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:11:46.122253  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0108 22:11:46.122758  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:11:46.123342  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:11:46.123412  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:11:46.123752  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:11:46.123910  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:11:46.124157  375556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:11:46.124499  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:11:46.124539  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:11:46.140751  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0108 22:11:46.141282  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:11:46.141773  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:11:46.141798  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:11:46.142141  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:11:46.142444  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:11:46.184643  375556 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 22:11:46.186001  375556 start.go:298] selected driver: kvm2
	I0108 22:11:46.186020  375556 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:11:46.186148  375556 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:11:46.186947  375556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:46.187023  375556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:11:46.203781  375556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:11:46.204243  375556 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:11:46.204341  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:11:46.204355  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:11:46.204368  375556 start_flags.go:321] config:
	{Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-29205
4 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:11:46.204574  375556 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:46.206922  375556 out.go:177] * Starting control plane node default-k8s-diff-port-292054 in cluster default-k8s-diff-port-292054
	I0108 22:11:49.059974  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:11:46.208771  375556 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:11:46.208837  375556 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:11:46.208846  375556 cache.go:56] Caching tarball of preloaded images
	I0108 22:11:46.208953  375556 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:11:46.208964  375556 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:11:46.209090  375556 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/config.json ...
	I0108 22:11:46.209292  375556 start.go:365] acquiring machines lock for default-k8s-diff-port-292054: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:11:52.131718  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:11:58.211727  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:01.283728  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:07.363651  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:10.435843  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:16.515718  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:19.587893  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:25.667716  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:28.739741  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:34.819670  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:37.891747  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:43.971702  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:47.043706  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:53.123662  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:56.195726  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:02.275699  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:05.347708  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:11.427670  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:14.499733  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:20.579716  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:23.651809  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:29.731813  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:32.803834  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:38.883645  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:41.955722  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:48.035781  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:51.107833  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:57.187725  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:00.259743  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:06.339763  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:09.411776  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:15.491797  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:18.563880  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:24.643806  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:27.715717  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:33.795783  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:36.867725  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:42.947651  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:46.019719  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:52.099719  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:55.171662  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:01.251699  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:04.323666  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:07.328244  375205 start.go:369] acquired machines lock for "no-preload-675668" in 4m2.333038111s
	I0108 22:15:07.328384  375205 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:07.328398  375205 fix.go:54] fixHost starting: 
	I0108 22:15:07.328972  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:07.329012  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:07.346002  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0108 22:15:07.346606  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:07.347087  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:15:07.347112  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:07.347614  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:07.347816  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:07.347977  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:15:07.349843  375205 fix.go:102] recreateIfNeeded on no-preload-675668: state=Stopped err=<nil>
	I0108 22:15:07.349873  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	W0108 22:15:07.350055  375205 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:07.352092  375205 out.go:177] * Restarting existing kvm2 VM for "no-preload-675668" ...
	I0108 22:15:07.325708  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:07.325751  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:15:07.327981  374880 machine.go:91] provisioned docker machine in 4m37.376179376s
	I0108 22:15:07.328067  374880 fix.go:56] fixHost completed within 4m37.402208453s
	I0108 22:15:07.328080  374880 start.go:83] releasing machines lock for "old-k8s-version-079759", held for 4m37.402236557s
	W0108 22:15:07.328149  374880 start.go:694] error starting host: provision: host is not running
	W0108 22:15:07.328386  374880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0108 22:15:07.328401  374880 start.go:709] Will try again in 5 seconds ...
	I0108 22:15:07.353648  375205 main.go:141] libmachine: (no-preload-675668) Calling .Start
	I0108 22:15:07.353904  375205 main.go:141] libmachine: (no-preload-675668) Ensuring networks are active...
	I0108 22:15:07.354917  375205 main.go:141] libmachine: (no-preload-675668) Ensuring network default is active
	I0108 22:15:07.355390  375205 main.go:141] libmachine: (no-preload-675668) Ensuring network mk-no-preload-675668 is active
	I0108 22:15:07.355764  375205 main.go:141] libmachine: (no-preload-675668) Getting domain xml...
	I0108 22:15:07.356506  375205 main.go:141] libmachine: (no-preload-675668) Creating domain...
	I0108 22:15:08.673735  375205 main.go:141] libmachine: (no-preload-675668) Waiting to get IP...
	I0108 22:15:08.674861  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:08.675407  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:08.675502  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:08.675369  376073 retry.go:31] will retry after 298.445271ms: waiting for machine to come up
	I0108 22:15:08.976053  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:08.976594  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:08.976624  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:08.976525  376073 retry.go:31] will retry after 372.862343ms: waiting for machine to come up
	I0108 22:15:09.351338  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:09.351843  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:09.351864  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:09.351801  376073 retry.go:31] will retry after 463.145179ms: waiting for machine to come up
	I0108 22:15:09.816629  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:09.817035  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:09.817059  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:09.816979  376073 retry.go:31] will retry after 390.229237ms: waiting for machine to come up
	I0108 22:15:12.328668  374880 start.go:365] acquiring machines lock for old-k8s-version-079759: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:15:10.208639  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:10.209034  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:10.209068  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:10.208972  376073 retry.go:31] will retry after 547.133251ms: waiting for machine to come up
	I0108 22:15:10.758143  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:10.758742  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:10.758779  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:10.758673  376073 retry.go:31] will retry after 833.304996ms: waiting for machine to come up
	I0108 22:15:11.594018  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:11.594517  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:11.594551  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:11.594482  376073 retry.go:31] will retry after 1.155542967s: waiting for machine to come up
	I0108 22:15:12.751694  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:12.752196  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:12.752233  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:12.752162  376073 retry.go:31] will retry after 1.197873107s: waiting for machine to come up
	I0108 22:15:13.951593  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:13.952050  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:13.952072  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:13.952005  376073 retry.go:31] will retry after 1.257059014s: waiting for machine to come up
	I0108 22:15:15.211632  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:15.212133  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:15.212161  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:15.212090  376073 retry.go:31] will retry after 2.27321783s: waiting for machine to come up
	I0108 22:15:17.487177  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:17.487684  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:17.487712  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:17.487631  376073 retry.go:31] will retry after 2.218202362s: waiting for machine to come up
	I0108 22:15:19.709130  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:19.709618  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:19.709651  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:19.709552  376073 retry.go:31] will retry after 2.976711307s: waiting for machine to come up
	I0108 22:15:22.687741  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:22.688337  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:22.688373  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:22.688238  376073 retry.go:31] will retry after 4.028238242s: waiting for machine to come up
	I0108 22:15:28.088862  375293 start.go:369] acquired machines lock for "embed-certs-903819" in 4m15.164556555s
	I0108 22:15:28.088954  375293 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:28.088965  375293 fix.go:54] fixHost starting: 
	I0108 22:15:28.089472  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:28.089526  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:28.108636  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0108 22:15:28.109141  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:28.109765  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:15:28.109816  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:28.110214  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:28.110458  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:28.110642  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:15:28.112595  375293 fix.go:102] recreateIfNeeded on embed-certs-903819: state=Stopped err=<nil>
	I0108 22:15:28.112635  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	W0108 22:15:28.112883  375293 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:28.115226  375293 out.go:177] * Restarting existing kvm2 VM for "embed-certs-903819" ...
	I0108 22:15:26.721451  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.721880  375205 main.go:141] libmachine: (no-preload-675668) Found IP for machine: 192.168.61.153
	I0108 22:15:26.721905  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has current primary IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.721912  375205 main.go:141] libmachine: (no-preload-675668) Reserving static IP address...
	I0108 22:15:26.722449  375205 main.go:141] libmachine: (no-preload-675668) Reserved static IP address: 192.168.61.153
	I0108 22:15:26.722475  375205 main.go:141] libmachine: (no-preload-675668) Waiting for SSH to be available...
	I0108 22:15:26.722498  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "no-preload-675668", mac: "52:54:00:08:3b:59", ip: "192.168.61.153"} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.722528  375205 main.go:141] libmachine: (no-preload-675668) DBG | skip adding static IP to network mk-no-preload-675668 - found existing host DHCP lease matching {name: "no-preload-675668", mac: "52:54:00:08:3b:59", ip: "192.168.61.153"}
	I0108 22:15:26.722545  375205 main.go:141] libmachine: (no-preload-675668) DBG | Getting to WaitForSSH function...
	I0108 22:15:26.724512  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.724861  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.724898  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.725004  375205 main.go:141] libmachine: (no-preload-675668) DBG | Using SSH client type: external
	I0108 22:15:26.725078  375205 main.go:141] libmachine: (no-preload-675668) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa (-rw-------)
	I0108 22:15:26.725130  375205 main.go:141] libmachine: (no-preload-675668) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:15:26.725152  375205 main.go:141] libmachine: (no-preload-675668) DBG | About to run SSH command:
	I0108 22:15:26.725172  375205 main.go:141] libmachine: (no-preload-675668) DBG | exit 0
	I0108 22:15:26.815569  375205 main.go:141] libmachine: (no-preload-675668) DBG | SSH cmd err, output: <nil>: 
	I0108 22:15:26.816005  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetConfigRaw
	I0108 22:15:26.816711  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:26.819269  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.819636  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.819681  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.819964  375205 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/config.json ...
	I0108 22:15:26.820191  375205 machine.go:88] provisioning docker machine ...
	I0108 22:15:26.820215  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:26.820446  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:26.820626  375205 buildroot.go:166] provisioning hostname "no-preload-675668"
	I0108 22:15:26.820648  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:26.820790  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:26.823021  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.823390  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.823421  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.823567  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:26.823781  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.823943  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.824103  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:26.824331  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:26.824924  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:26.824958  375205 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-675668 && echo "no-preload-675668" | sudo tee /etc/hostname
	I0108 22:15:26.960664  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-675668
	
	I0108 22:15:26.960713  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:26.964110  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.964397  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.964437  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.964605  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:26.964918  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.965153  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.965334  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:26.965543  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:26.965958  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:26.965985  375205 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-675668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-675668/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-675668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:15:27.102584  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:27.102632  375205 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:15:27.102663  375205 buildroot.go:174] setting up certificates
	I0108 22:15:27.102678  375205 provision.go:83] configureAuth start
	I0108 22:15:27.102688  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:27.103024  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:27.105986  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.106379  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.106400  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.106586  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.108670  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.109003  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.109029  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.109216  375205 provision.go:138] copyHostCerts
	I0108 22:15:27.109300  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:15:27.109320  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:15:27.109426  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:15:27.109561  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:15:27.109571  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:15:27.109599  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:15:27.109663  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:15:27.109670  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:15:27.109691  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:15:27.109751  375205 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.no-preload-675668 san=[192.168.61.153 192.168.61.153 localhost 127.0.0.1 minikube no-preload-675668]
	I0108 22:15:27.297801  375205 provision.go:172] copyRemoteCerts
	I0108 22:15:27.297888  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:15:27.297915  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.301050  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.301503  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.301545  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.301737  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.301955  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.302121  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.302265  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:27.394076  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:15:27.420873  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 22:15:27.446852  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:15:27.475352  375205 provision.go:86] duration metric: configureAuth took 372.6598ms
	I0108 22:15:27.475406  375205 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:15:27.475661  375205 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:15:27.475793  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.478557  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.478872  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.478906  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.479091  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.479354  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.479579  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.479768  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.479939  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:27.480273  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:27.480291  375205 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:15:27.822802  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:15:27.822834  375205 machine.go:91] provisioned docker machine in 1.002628424s
	I0108 22:15:27.822845  375205 start.go:300] post-start starting for "no-preload-675668" (driver="kvm2")
	I0108 22:15:27.822858  375205 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:15:27.822874  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:27.823282  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:15:27.823320  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.825948  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.826276  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.826298  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.826407  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.826597  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.826793  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.826922  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:27.918118  375205 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:15:27.922998  375205 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:15:27.923044  375205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:15:27.923151  375205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:15:27.923275  375205 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:15:27.923407  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:15:27.933715  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:27.960061  375205 start.go:303] post-start completed in 137.19795ms
	I0108 22:15:27.960109  375205 fix.go:56] fixHost completed within 20.631710493s
	I0108 22:15:27.960137  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.963254  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.963656  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.963688  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.964017  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.964325  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.964533  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.964722  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.964945  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:27.965301  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:27.965314  375205 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:15:28.088665  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752128.028688224
	
	I0108 22:15:28.088696  375205 fix.go:206] guest clock: 1704752128.028688224
	I0108 22:15:28.088706  375205 fix.go:219] Guest: 2024-01-08 22:15:28.028688224 +0000 UTC Remote: 2024-01-08 22:15:27.960113957 +0000 UTC m=+263.145626296 (delta=68.574267ms)
	I0108 22:15:28.088734  375205 fix.go:190] guest clock delta is within tolerance: 68.574267ms
	I0108 22:15:28.088742  375205 start.go:83] releasing machines lock for "no-preload-675668", held for 20.760456272s
	I0108 22:15:28.088775  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.089136  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:28.091887  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.092255  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.092274  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.092537  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093187  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093416  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093504  375205 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:15:28.093546  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:28.093722  375205 ssh_runner.go:195] Run: cat /version.json
	I0108 22:15:28.093769  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:28.096920  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.096969  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097385  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.097428  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097460  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.097482  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097739  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:28.097767  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:28.098020  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:28.098074  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:28.098243  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:28.098254  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:28.098459  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:28.098460  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:28.221319  375205 ssh_runner.go:195] Run: systemctl --version
	I0108 22:15:28.227501  375205 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:15:28.379259  375205 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:15:28.386159  375205 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:15:28.386272  375205 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:15:28.404416  375205 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:15:28.404469  375205 start.go:475] detecting cgroup driver to use...
	I0108 22:15:28.404575  375205 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:15:28.421612  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:15:28.438920  375205 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:15:28.439001  375205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:15:28.455220  375205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:15:28.473982  375205 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:15:28.610132  375205 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:15:28.735485  375205 docker.go:219] disabling docker service ...
	I0108 22:15:28.735627  375205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:15:28.750327  375205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:15:28.768782  375205 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:15:28.891784  375205 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:15:29.006680  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:15:29.023187  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:15:29.043520  375205 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:15:29.043601  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.056442  375205 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:15:29.056525  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.066874  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.077969  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
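
The sed edits above pin the CRI-O pause image, switch the cgroup manager to cgroupfs, and reset conmon_cgroup before the daemon is restarted. A minimal local sketch of the first two edits in Go, assuming direct write access to 02-crio.conf (the real step shells out to sed over SSH exactly as logged):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Rewrite pause_image and cgroup_manager in a CRI-O drop-in config,
// mirroring the sed edits in the log above. Path and values are taken
// from the logged commands; running this requires root.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}
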
	I0108 22:15:29.090310  375205 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:15:29.102253  375205 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:15:29.114920  375205 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:15:29.115022  375205 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:15:29.131677  375205 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:15:29.142326  375205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:15:29.259562  375205 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:15:29.463482  375205 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:15:29.463554  375205 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:15:29.468579  375205 start.go:543] Will wait 60s for crictl version
	I0108 22:15:29.468665  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:29.476630  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:15:29.525900  375205 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:15:29.526053  375205 ssh_runner.go:195] Run: crio --version
	I0108 22:15:29.579948  375205 ssh_runner.go:195] Run: crio --version
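
"Will wait 60s for socket path" above is a simple poll on /var/run/crio/crio.sock after the restart. A minimal sketch of that wait loop (path and timeout as in the log; an illustration, not minikube's own implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the socket path exists or the deadline passes,
// like the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is ready")
}
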
	I0108 22:15:29.632573  375205 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0108 22:15:29.634161  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:29.637972  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:29.638472  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:29.638514  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:29.638828  375205 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0108 22:15:29.644170  375205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
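
The grep/echo/cp pipeline above rewrites /etc/hosts so host.minikube.internal always resolves to the host gateway IP. A rough Go equivalent of the same idempotent edit (ensureHostsEntry is a hypothetical helper; the real step runs the shell pipeline exactly as logged):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given host and appends a
// fresh "ip<TAB>host" entry, mirroring the /etc/hosts edit in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry ensured")
}
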
	I0108 22:15:29.658242  375205 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:15:29.658302  375205 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:29.701366  375205 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0108 22:15:29.701422  375205 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:15:29.701626  375205 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0108 22:15:29.701685  375205 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.701583  375205 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.701743  375205 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.701674  375205 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.701597  375205 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:29.701743  375205 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.701582  375205 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.703644  375205 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:29.703679  375205 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.703705  375205 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0108 22:15:29.703722  375205 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.703643  375205 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.703651  375205 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.703655  375205 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.703652  375205 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:28.117212  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Start
	I0108 22:15:28.117480  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring networks are active...
	I0108 22:15:28.118363  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring network default is active
	I0108 22:15:28.118783  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring network mk-embed-certs-903819 is active
	I0108 22:15:28.119425  375293 main.go:141] libmachine: (embed-certs-903819) Getting domain xml...
	I0108 22:15:28.120203  375293 main.go:141] libmachine: (embed-certs-903819) Creating domain...
	I0108 22:15:29.474037  375293 main.go:141] libmachine: (embed-certs-903819) Waiting to get IP...
	I0108 22:15:29.475109  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:29.475735  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:29.475862  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:29.475696  376188 retry.go:31] will retry after 284.136631ms: waiting for machine to come up
	I0108 22:15:29.762077  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:29.762586  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:29.762614  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:29.762538  376188 retry.go:31] will retry after 303.052805ms: waiting for machine to come up
	I0108 22:15:30.067299  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:30.067947  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:30.067997  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:30.067822  376188 retry.go:31] will retry after 471.679894ms: waiting for machine to come up
	I0108 22:15:30.541942  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:30.542626  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:30.542658  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:30.542542  376188 retry.go:31] will retry after 534.448155ms: waiting for machine to come up
	I0108 22:15:31.078549  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:31.079168  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:31.079212  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:31.079092  376188 retry.go:31] will retry after 595.348277ms: waiting for machine to come up
	I0108 22:15:31.675832  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:31.676249  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:31.676278  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:31.676209  376188 retry.go:31] will retry after 618.587146ms: waiting for machine to come up
	I0108 22:15:32.296396  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:32.296982  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:32.297011  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:32.296820  376188 retry.go:31] will retry after 730.322233ms: waiting for machine to come up
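
The repeated "will retry after ...: waiting for machine to come up" lines are a bounded retry loop with a growing, slightly jittered delay while libvirt hands out the VM's DHCP lease. A minimal sketch of that pattern (delays and attempt count are illustrative; the exact backoff schedule is minikube's own):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts are exhausted, waiting a
// growing, jittered interval between tries, like the retry.go lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	tries := 0
	err := retry(5, 300*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("machine is up after", tries, "tries")
}
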
	I0108 22:15:29.877942  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.891002  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.891714  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.893908  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0108 22:15:29.901880  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.959729  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.975241  375205 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0108 22:15:29.975301  375205 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.975308  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.975351  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.022214  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.074289  375205 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0108 22:15:30.074350  375205 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:30.074422  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.107460  375205 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0108 22:15:30.107547  375205 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:30.107634  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.137086  375205 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0108 22:15:30.137155  375205 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:30.137227  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.156198  375205 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0108 22:15:30.156291  375205 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:30.156357  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163468  375205 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0108 22:15:30.163522  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:30.163532  375205 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:30.163563  375205 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0108 22:15:30.163616  375205 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.163654  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:30.163660  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163762  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:30.163779  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:30.163583  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163849  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:30.304360  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:30.304458  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0108 22:15:30.304478  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0108 22:15:30.304481  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:30.304564  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.304603  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.304568  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:30.304636  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:30.304678  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:30.304738  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:30.307415  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:30.307516  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:30.322465  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0108 22:15:30.322505  375205 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.322616  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.323275  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390462  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0108 22:15:30.390530  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0108 22:15:30.390546  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 22:15:30.390566  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390612  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390651  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:30.390657  375205 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:32.649486  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.326834963s)
	I0108 22:15:32.649532  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0108 22:15:32.649560  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:32.649569  375205 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.258890537s)
	I0108 22:15:32.649612  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:32.649622  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0108 22:15:32.649573  375205 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.258898806s)
	I0108 22:15:32.649638  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0108 22:15:33.028658  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:33.029086  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:33.029117  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:33.029023  376188 retry.go:31] will retry after 1.009306133s: waiting for machine to come up
	I0108 22:15:34.040145  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:34.040574  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:34.040610  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:34.040517  376188 retry.go:31] will retry after 1.215287271s: waiting for machine to come up
	I0108 22:15:35.258130  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:35.258735  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:35.258767  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:35.258669  376188 retry.go:31] will retry after 1.604579686s: waiting for machine to come up
	I0108 22:15:36.865156  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:36.865635  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:36.865671  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:36.865575  376188 retry.go:31] will retry after 1.938816817s: waiting for machine to come up
	I0108 22:15:35.937824  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.288173217s)
	I0108 22:15:35.937859  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0108 22:15:35.937899  375205 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:35.938005  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:38.805792  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:38.806390  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:38.806420  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:38.806318  376188 retry.go:31] will retry after 2.933374936s: waiting for machine to come up
	I0108 22:15:41.741267  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:41.741924  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:41.741962  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:41.741850  376188 retry.go:31] will retry after 3.549554778s: waiting for machine to come up
	I0108 22:15:40.512566  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.574525189s)
	I0108 22:15:40.512605  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0108 22:15:40.512642  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:40.512699  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:43.180687  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.667951486s)
	I0108 22:15:43.180730  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0108 22:15:43.180766  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:43.180849  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:44.539187  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.35830707s)
	I0108 22:15:44.539234  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0108 22:15:44.539274  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:44.539335  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:45.294867  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:45.295522  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:45.295572  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:45.295439  376188 retry.go:31] will retry after 5.642834673s: waiting for machine to come up
	I0108 22:15:46.498360  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.95899411s)
	I0108 22:15:46.498392  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0108 22:15:46.498417  375205 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:46.498473  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:47.553626  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.055107765s)
	I0108 22:15:47.553672  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0108 22:15:47.553708  375205 cache_images.go:123] Successfully loaded all cached images
	I0108 22:15:47.553715  375205 cache_images.go:92] LoadImages completed in 17.852269213s
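
LoadImages only transfers an image when "podman image inspect" cannot find the expected ID in the runtime; tarballs already on the node are skipped ("copy: skipping ... (exists)") and the rest are pushed with "podman load". A small sketch of that check using os/exec (the commands match the ones logged; error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
)

// imageLoaded reports whether the container runtime already has the image,
// the same probe the log performs with "sudo podman image inspect".
func imageLoaded(ref string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
}

// loadFromTarball pushes a cached image tarball into the runtime,
// mirroring "sudo podman load -i /var/lib/minikube/images/...".
func loadFromTarball(tar string) error {
	return exec.Command("sudo", "podman", "load", "-i", tar).Run()
}

func main() {
	ref := "registry.k8s.io/coredns/coredns:v1.11.1"
	if imageLoaded(ref) {
		fmt.Println("already loaded:", ref)
		return
	}
	if err := loadFromTarball("/var/lib/minikube/images/coredns_v1.11.1"); err != nil {
		panic(err)
	}
	fmt.Println("transferred and loaded", ref)
}
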
	I0108 22:15:47.553796  375205 ssh_runner.go:195] Run: crio config
	I0108 22:15:47.626385  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:15:47.626428  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:15:47.626471  375205 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:15:47.626503  375205 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.153 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-675668 NodeName:no-preload-675668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:15:47.626764  375205 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-675668"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
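
The kubeadm config above is generated from the options struct logged at kubeadm.go:176 (AdvertiseAddress, NodeName, KubernetesVersion, and so on). A toy text/template rendering of a couple of those fields; this only illustrates the templating approach, it is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the kubeadm options in the log; the real
// struct carries many more fields (pod/service CIDRs, extra args, ...).
type kubeadmOpts struct {
	AdvertiseAddress  string
	NodeName          string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.61.153",
		NodeName:          "no-preload-675668",
		KubernetesVersion: "v1.29.0-rc.2",
	}
	// Render to stdout; the real flow writes /var/tmp/minikube/kubeadm.yaml.new.
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
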
	
	I0108 22:15:47.626889  375205 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-675668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-675668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:15:47.626994  375205 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0108 22:15:47.638161  375205 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:15:47.638263  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:15:47.648004  375205 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0108 22:15:47.667877  375205 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0108 22:15:47.685914  375205 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0108 22:15:47.705814  375205 ssh_runner.go:195] Run: grep 192.168.61.153	control-plane.minikube.internal$ /etc/hosts
	I0108 22:15:47.709842  375205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:47.724788  375205 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668 for IP: 192.168.61.153
	I0108 22:15:47.724877  375205 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:15:47.725349  375205 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:15:47.725420  375205 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:15:47.725541  375205 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.key
	I0108 22:15:47.725626  375205 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.key.0768d075
	I0108 22:15:47.725668  375205 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.key
	I0108 22:15:47.725793  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:15:47.725822  375205 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:15:47.725837  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:15:47.725861  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:15:47.725886  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:15:47.725908  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:15:47.725952  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:47.727130  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:15:47.753432  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:15:47.780962  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:15:47.807446  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:15:47.834334  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:15:47.861638  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:15:47.889479  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:15:47.916119  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:15:47.944635  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:15:47.971740  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:15:47.998594  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:15:48.025907  375205 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:15:48.044525  375205 ssh_runner.go:195] Run: openssl version
	I0108 22:15:48.050542  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:15:48.061205  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.066945  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.067060  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.074266  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:15:48.084613  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:15:48.095856  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.101596  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.101677  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.108991  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:15:48.120690  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:15:48.130747  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.135480  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.135576  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.141462  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:15:48.152597  375205 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:15:48.158657  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:15:48.165978  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:15:48.174164  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:15:48.181140  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:15:48.187819  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:15:48.194088  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
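
Each "openssl x509 -noout -in ... -checkend 86400" above asks whether a certificate expires within the next 24 hours. The equivalent check in Go with crypto/x509 (a sketch; the file path is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window, the same question "openssl x509 -checkend <seconds>" answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
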
	I0108 22:15:48.200487  375205 kubeadm.go:404] StartCluster: {Name:no-preload-675668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-675668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.153 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:15:48.200612  375205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:15:48.200686  375205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:15:48.244804  375205 cri.go:89] found id: ""
	I0108 22:15:48.244894  375205 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:15:48.255502  375205 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:15:48.255549  375205 kubeadm.go:636] restartCluster start
	I0108 22:15:48.255625  375205 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:15:48.265914  375205 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:48.267815  375205 kubeconfig.go:92] found "no-preload-675668" server: "https://192.168.61.153:8443"
	I0108 22:15:48.271555  375205 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:15:48.281619  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:48.281694  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:48.293360  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:48.781917  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:48.782063  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:48.795101  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:49.281683  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:49.281784  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:49.295392  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:49.781910  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:49.782011  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:49.795016  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
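
The repeated "Checking apiserver status" entries poll about twice a second for a kube-apiserver process with pgrep; while the restarted control plane is still coming up, each check exits 1 as shown above. A minimal sketch of that poll (it uses pgrep exactly as in the log, with a hard deadline added for illustration):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep for a kube-apiserver process, the same probe
// the log runs, until a pid appears or the timeout expires.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if out, err := cmd.Output(); err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(4 * time.Minute); err != nil {
		panic(err)
	}
}
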
	I0108 22:15:52.309259  375556 start.go:369] acquired machines lock for "default-k8s-diff-port-292054" in 4m6.099929885s
	I0108 22:15:52.309332  375556 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:52.309353  375556 fix.go:54] fixHost starting: 
	I0108 22:15:52.309795  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:52.309827  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:52.327510  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42663
	I0108 22:15:52.328130  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:52.328844  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:15:52.328877  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:52.329458  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:52.329740  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:15:52.329938  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:15:52.331851  375556 fix.go:102] recreateIfNeeded on default-k8s-diff-port-292054: state=Stopped err=<nil>
	I0108 22:15:52.331887  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	W0108 22:15:52.332071  375556 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:52.334604  375556 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-292054" ...
	I0108 22:15:50.942498  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.943038  375293 main.go:141] libmachine: (embed-certs-903819) Found IP for machine: 192.168.72.132
	I0108 22:15:50.943076  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has current primary IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.943087  375293 main.go:141] libmachine: (embed-certs-903819) Reserving static IP address...
	I0108 22:15:50.943577  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "embed-certs-903819", mac: "52:54:00:73:74:da", ip: "192.168.72.132"} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:50.943606  375293 main.go:141] libmachine: (embed-certs-903819) Reserved static IP address: 192.168.72.132
	I0108 22:15:50.943620  375293 main.go:141] libmachine: (embed-certs-903819) DBG | skip adding static IP to network mk-embed-certs-903819 - found existing host DHCP lease matching {name: "embed-certs-903819", mac: "52:54:00:73:74:da", ip: "192.168.72.132"}
	I0108 22:15:50.943636  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Getting to WaitForSSH function...
	I0108 22:15:50.943655  375293 main.go:141] libmachine: (embed-certs-903819) Waiting for SSH to be available...
	I0108 22:15:50.945879  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.946330  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:50.946362  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.946493  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Using SSH client type: external
	I0108 22:15:50.946532  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa (-rw-------)
	I0108 22:15:50.946589  375293 main.go:141] libmachine: (embed-certs-903819) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:15:50.946606  375293 main.go:141] libmachine: (embed-certs-903819) DBG | About to run SSH command:
	I0108 22:15:50.946641  375293 main.go:141] libmachine: (embed-certs-903819) DBG | exit 0
	I0108 22:15:51.051155  375293 main.go:141] libmachine: (embed-certs-903819) DBG | SSH cmd err, output: <nil>: 
	I0108 22:15:51.051655  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetConfigRaw
	I0108 22:15:51.052363  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:51.054890  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.055247  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.055276  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.055618  375293 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/config.json ...
	I0108 22:15:51.055862  375293 machine.go:88] provisioning docker machine ...
	I0108 22:15:51.055887  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:51.056117  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.056263  375293 buildroot.go:166] provisioning hostname "embed-certs-903819"
	I0108 22:15:51.056283  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.056427  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.058406  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.058775  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.058822  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.058953  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.059154  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.059318  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.059478  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.059654  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.060145  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.060166  375293 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-903819 && echo "embed-certs-903819" | sudo tee /etc/hostname
	I0108 22:15:51.207967  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-903819
	
	I0108 22:15:51.208007  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.210549  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.210848  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.210876  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.211120  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.211372  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.211539  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.211707  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.211879  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.212375  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.212399  375293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-903819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-903819/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-903819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:15:51.356887  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:51.356936  375293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:15:51.356968  375293 buildroot.go:174] setting up certificates
	I0108 22:15:51.356997  375293 provision.go:83] configureAuth start
	I0108 22:15:51.357012  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.357424  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:51.360156  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.360553  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.360590  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.360735  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.363438  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.363850  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.363905  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.364020  375293 provision.go:138] copyHostCerts
	I0108 22:15:51.364111  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:15:51.364126  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:15:51.364193  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:15:51.364332  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:15:51.364347  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:15:51.364376  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:15:51.364453  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:15:51.364463  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:15:51.364490  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:15:51.364552  375293 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.embed-certs-903819 san=[192.168.72.132 192.168.72.132 localhost 127.0.0.1 minikube embed-certs-903819]
	I0108 22:15:51.472949  375293 provision.go:172] copyRemoteCerts
	I0108 22:15:51.473023  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:15:51.473053  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.476622  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.476975  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.476997  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.477269  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.477524  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.477719  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.477852  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:51.576283  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:15:51.604809  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:15:51.633353  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 22:15:51.660375  375293 provision.go:86] duration metric: configureAuth took 303.352585ms
	I0108 22:15:51.660422  375293 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:15:51.660657  375293 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:15:51.660764  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.664337  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.664738  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.664796  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.665089  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.665394  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.665649  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.665823  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.666047  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.666595  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.666633  375293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:15:52.023397  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:15:52.023450  375293 machine.go:91] provisioned docker machine in 967.568803ms
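The "%!s(MISSING)" tokens in the provisioning command a few lines up are log-formatting artifacts: minikube handed its logger a %s verb without a matching argument, but the echoed result shows what was actually written to the guest. Reconstructed from that output, the step amounts to (a sketch of the same command, nothing new):

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

In other words, the in-cluster service CIDR (10.96.0.0/12) is marked as an insecure registry range and CRI-O is restarted so the option takes effect.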
	I0108 22:15:52.023469  375293 start.go:300] post-start starting for "embed-certs-903819" (driver="kvm2")
	I0108 22:15:52.023485  375293 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:15:52.023514  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.023922  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:15:52.023979  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.026998  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.027417  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.027447  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.027665  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.027875  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.028050  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.028240  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.126087  375293 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:15:52.130371  375293 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:15:52.130414  375293 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:15:52.130509  375293 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:15:52.130609  375293 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:15:52.130738  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:15:52.139897  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:52.166648  375293 start.go:303] post-start completed in 143.156785ms
	I0108 22:15:52.166691  375293 fix.go:56] fixHost completed within 24.077726567s
	I0108 22:15:52.166721  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.169452  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.169849  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.169880  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.170156  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.170463  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.170716  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.170909  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.171089  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:52.171520  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:52.171535  375293 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:15:52.309104  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752152.251541184
	
	I0108 22:15:52.309136  375293 fix.go:206] guest clock: 1704752152.251541184
	I0108 22:15:52.309146  375293 fix.go:219] Guest: 2024-01-08 22:15:52.251541184 +0000 UTC Remote: 2024-01-08 22:15:52.166696501 +0000 UTC m=+279.417512277 (delta=84.844683ms)
	I0108 22:15:52.309173  375293 fix.go:190] guest clock delta is within tolerance: 84.844683ms
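The garbled "date +%!s(MISSING).%!N(MISSING)" a few lines up is the same missing-argument artifact; judging by its output (1704752152.251541184), the command actually run is simply:

	date +%s.%N    # guest time as seconds.nanoseconds since the epoch

minikube parses that value as the guest clock and compares it against the host's wall clock, accepting the ~85ms delta seen here as within tolerance.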
	I0108 22:15:52.309180  375293 start.go:83] releasing machines lock for "embed-certs-903819", held for 24.220254192s
	I0108 22:15:52.309214  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.309514  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:52.312538  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.312905  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.312928  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.313161  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313692  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313879  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313971  375293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:15:52.314031  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.314154  375293 ssh_runner.go:195] Run: cat /version.json
	I0108 22:15:52.314185  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.316938  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317214  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317363  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.317398  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.317425  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317456  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317599  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.317746  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.317803  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.317882  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.318074  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.318074  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.318273  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.318332  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.451292  375293 ssh_runner.go:195] Run: systemctl --version
	I0108 22:15:52.459839  375293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:15:52.609989  375293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:15:52.617215  375293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:15:52.617326  375293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:15:52.633017  375293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:15:52.633068  375293 start.go:475] detecting cgroup driver to use...
	I0108 22:15:52.633180  375293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:15:52.649947  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:15:52.664459  375293 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:15:52.664530  375293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:15:52.680105  375293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:15:52.696100  375293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:15:52.814616  375293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:15:52.951975  375293 docker.go:219] disabling docker service ...
	I0108 22:15:52.952086  375293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:15:52.967800  375293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:15:52.982903  375293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:15:53.107033  375293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:15:53.222765  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:15:53.238572  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:15:53.260919  375293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:15:53.261035  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.271980  375293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:15:53.272084  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.283693  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.298686  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
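The three sed invocations above edit CRI-O's drop-in config in place. After they run, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should look roughly like this (a sketch inferred from the sed expressions, not a capture of the file):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

i.e. the pause image is pinned, the cgroup driver is forced to cgroupfs, and conmon is placed in the pod cgroup.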
	I0108 22:15:53.310543  375293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:15:53.322108  375293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:15:53.331904  375293 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:15:53.331982  375293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:15:53.347091  375293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
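The sysctl probe above fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, which is why minikube immediately follows it with modprobe and then enables IP forwarding. Condensed, this run's sequence is:

	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter   # load bridge netfilter if the key is missing
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"                            # ensure packet forwarding is on

after which systemd is reloaded and CRI-O restarted, as the next two lines show.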
	I0108 22:15:53.358365  375293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:15:53.462607  375293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:15:53.658267  375293 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:15:53.658362  375293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:15:53.663859  375293 start.go:543] Will wait 60s for crictl version
	I0108 22:15:53.663941  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:15:53.668413  375293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:15:53.714319  375293 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:15:53.714456  375293 ssh_runner.go:195] Run: crio --version
	I0108 22:15:53.774601  375293 ssh_runner.go:195] Run: crio --version
	I0108 22:15:53.840055  375293 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 22:15:50.282005  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:50.282118  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:50.296034  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:50.781676  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:50.781865  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:50.794250  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:51.281771  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:51.281866  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:51.296593  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:51.782094  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:51.782193  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:51.797110  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.281711  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:52.281844  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:52.294916  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.782076  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:52.782193  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:52.796700  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:53.282191  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:53.282320  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:53.300226  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:53.781708  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:53.781807  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:53.794426  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:54.281901  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:54.282005  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:54.305276  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:54.781646  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:54.781765  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:54.798991  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.336203  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Start
	I0108 22:15:52.336440  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring networks are active...
	I0108 22:15:52.337318  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring network default is active
	I0108 22:15:52.337660  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring network mk-default-k8s-diff-port-292054 is active
	I0108 22:15:52.338019  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Getting domain xml...
	I0108 22:15:52.338689  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Creating domain...
	I0108 22:15:53.715046  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting to get IP...
	I0108 22:15:53.716237  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.716849  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.716944  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:53.716801  376345 retry.go:31] will retry after 252.013763ms: waiting for machine to come up
	I0108 22:15:53.970408  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.971019  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.971049  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:53.970958  376345 retry.go:31] will retry after 266.473735ms: waiting for machine to come up
	I0108 22:15:54.239763  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.240226  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.240251  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:54.240173  376345 retry.go:31] will retry after 429.57645ms: waiting for machine to come up
	I0108 22:15:54.672202  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.672716  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.672752  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:54.672669  376345 retry.go:31] will retry after 585.093805ms: waiting for machine to come up
	I0108 22:15:55.259153  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.259706  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.259743  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:55.259654  376345 retry.go:31] will retry after 689.434093ms: waiting for machine to come up
	I0108 22:15:55.950610  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.951205  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.951239  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:55.951157  376345 retry.go:31] will retry after 895.874654ms: waiting for machine to come up
	I0108 22:15:53.841949  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:53.845797  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:53.846200  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:53.846248  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:53.846494  375293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0108 22:15:53.851791  375293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:53.866130  375293 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:15:53.866207  375293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:53.932186  375293 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:15:53.932311  375293 ssh_runner.go:195] Run: which lz4
	I0108 22:15:53.937259  375293 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:15:53.944022  375293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:15:53.944077  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 22:15:55.993976  375293 crio.go:444] Took 2.056742 seconds to copy over tarball
	I0108 22:15:55.994073  375293 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
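Because crictl reported no preloaded images and the stat check found no /preloaded.tar.lz4 on the guest, minikube copies the ~458 MB preload tarball over SSH and unpacks it. Stripped of the runner plumbing, the guest-side steps for process 375293 are:

	stat -c "%s %y" /preloaded.tar.lz4               # existence check; it fails, so the tarball is copied over
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # extract the cached images into /var (~3.7s further down)
	rm /preloaded.tar.lz4                            # removed once extraction completes

The completion and cleanup appear further down, interleaved with the other machines' logs.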
	I0108 22:15:55.281653  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:55.281788  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:55.303179  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:55.781655  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:55.781803  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:55.801287  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:56.281804  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:56.281897  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:56.306479  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:56.782123  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:56.782248  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:56.799241  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:57.281778  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:57.281926  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:57.299917  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:57.782255  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:57.782392  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:57.797960  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:58.282738  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:58.282919  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:58.300271  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:58.300333  375205 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:15:58.300349  375205 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:15:58.300365  375205 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:15:58.300452  375205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:15:58.353658  375205 cri.go:89] found id: ""
	I0108 22:15:58.353755  375205 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:15:58.372503  375205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:15:58.393266  375205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:15:58.393366  375205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:15:58.406210  375205 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:15:58.406255  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:58.570457  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:59.811449  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.240942109s)
	I0108 22:15:59.811494  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
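None of the four kubeconfig files exist on the node (the ls check above exits with status 2), so minikube skips the stale-config cleanup and reconfigures the cluster from scratch: it promotes the freshly generated kubeadm.yaml and re-runs the individual kubeadm init phases against it. Pulled out of the runner output, the sequence is:

	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml

The kubeconfig phase accounts for the ~1.2s completion logged above.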
	I0108 22:15:56.848455  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:56.848893  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:56.848925  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:56.848869  376345 retry.go:31] will retry after 1.095460706s: waiting for machine to come up
	I0108 22:15:57.946534  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:57.947045  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:57.947084  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:57.947000  376345 retry.go:31] will retry after 975.046116ms: waiting for machine to come up
	I0108 22:15:58.923872  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:58.924402  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:58.924436  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:58.924351  376345 retry.go:31] will retry after 1.855498831s: waiting for machine to come up
	I0108 22:16:00.781295  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:00.781813  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:00.781842  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:00.781745  376345 retry.go:31] will retry after 1.560909915s: waiting for machine to come up
	I0108 22:15:59.648230  375293 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.654100182s)
	I0108 22:15:59.648275  375293 crio.go:451] Took 3.654264 seconds to extract the tarball
	I0108 22:15:59.648293  375293 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:15:59.707614  375293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:59.763291  375293 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:15:59.763318  375293 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:15:59.763416  375293 ssh_runner.go:195] Run: crio config
	I0108 22:15:59.840951  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:15:59.840986  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:15:59.841015  375293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:15:59.841038  375293 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-903819 NodeName:embed-certs-903819 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:15:59.841205  375293 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-903819"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
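The three evictionHard values above fell victim to the same formatter quirk: the literal percent sign in "0%" was consumed as a format verb, yielding "0%!"(MISSING). The values evidently intended here (consistent with the "disable disk resource management" comment) are:

	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"

which turns off disk-pressure-based pod eviction on the node.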
	
	I0108 22:15:59.841283  375293 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-903819 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-903819 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
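The kubelet drop-in shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 378-byte scp just below). The empty ExecStart= line is deliberate: in a systemd drop-in it clears any ExecStart inherited from the base unit before the next line installs minikube's kubelet invocation:

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-903819 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132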
	I0108 22:15:59.841341  375293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:15:59.854399  375293 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:15:59.854521  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:15:59.864630  375293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0108 22:15:59.887590  375293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:15:59.907618  375293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0108 22:15:59.930429  375293 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I0108 22:15:59.935347  375293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:59.954840  375293 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819 for IP: 192.168.72.132
	I0108 22:15:59.954893  375293 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:15:59.955092  375293 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:15:59.955151  375293 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:15:59.955277  375293 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/client.key
	I0108 22:15:59.955460  375293 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.key.b7fe571d
	I0108 22:15:59.955557  375293 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.key
	I0108 22:15:59.955780  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:15:59.955832  375293 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:15:59.955855  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:15:59.955897  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:15:59.955931  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:15:59.955962  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:15:59.956023  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:59.957003  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:15:59.984268  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:16:00.018065  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:00.049758  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:00.079731  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:00.115904  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:00.148655  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:00.186204  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:00.224356  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:00.258906  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:00.293420  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:00.328219  375293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:00.351811  375293 ssh_runner.go:195] Run: openssl version
	I0108 22:16:00.360327  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:00.373384  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.381553  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.381653  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.391609  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:00.406242  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:00.419455  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.426093  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.426218  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.433793  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:00.446550  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:00.463174  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.470386  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.470471  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.477752  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
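The openssl/ln sequence above follows the standard OpenSSL trust-store convention: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout`, and a symlink named `<subject-hash>.0` (e.g. b5213941.0 for minikubeCA.pem) is created under /etc/ssl/certs so TLS clients on the guest can find the CA. A minimal Go sketch of that convention, assuming `openssl` is on PATH (helper name and paths are illustrative, not minikube's actual code):

    // linkCert hashes a PEM certificate and installs the <hash>.0 symlink,
    // mirroring the "openssl x509 -hash" + "ln -fs" pair in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkCert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // emulate ln -f: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }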
	I0108 22:16:00.492003  375293 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:00.498273  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:00.506305  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:00.515120  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:00.523909  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:00.531966  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:00.540080  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
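The six `-checkend 86400` runs above assert that each control-plane certificate remains valid for at least another 24 hours; openssl exits non-zero if the certificate will have expired 86400 seconds from now. The same check expressed natively in Go, as a sketch rather than minikube's implementation:

    // expiresWithin reports whether the certificate at pemPath expires within d,
    // i.e. what a failing "openssl x509 -checkend" would signal.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }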
	I0108 22:16:00.547673  375293 kubeadm.go:404] StartCluster: {Name:embed-certs-903819 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-903819 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:00.547852  375293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:00.547933  375293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:00.596555  375293 cri.go:89] found id: ""
	I0108 22:16:00.596644  375293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:00.607985  375293 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:00.608023  375293 kubeadm.go:636] restartCluster start
	I0108 22:16:00.608092  375293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:00.620669  375293 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:00.621860  375293 kubeconfig.go:92] found "embed-certs-903819" server: "https://192.168.72.132:8443"
	I0108 22:16:00.624246  375293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:00.638481  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:00.638578  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:00.658261  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:01.138670  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:01.138876  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:01.154778  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:01.639152  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:01.639290  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:01.659301  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:02.138679  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:02.138871  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:02.159427  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:02.638859  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:02.638970  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:02.660608  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
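The repeating "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs above are a poll: roughly every 500ms the runner looks for a kube-apiserver process with `pgrep -xnf` and treats a non-zero exit as "not up yet". A hedged sketch of such a loop (function name and timeout are illustrative, not the code in api_server.go):

    // waitForAPIServerProcess polls pgrep every 500ms until the apiserver
    // process exists or the context deadline expires.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForAPIServerProcess(ctx context.Context) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		// pgrep exits 0 only when a matching process exists.
    		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("apiserver process did not appear: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()
    	fmt.Println(waitForAPIServerProcess(ctx))
    }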
	I0108 22:16:00.076906  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:00.244500  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:00.356164  375205 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:00.356290  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:00.856674  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:01.356420  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:01.857416  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:02.356778  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:02.857385  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:03.356493  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:03.379896  375205 api_server.go:72] duration metric: took 3.023730091s to wait for apiserver process to appear ...
	I0108 22:16:03.379953  375205 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:03.380023  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:02.344786  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:02.345408  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:02.345444  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:02.345339  376345 retry.go:31] will retry after 2.336202352s: waiting for machine to come up
	I0108 22:16:04.685192  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:04.685894  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:04.685947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:04.685809  376345 retry.go:31] will retry after 3.559467663s: waiting for machine to come up
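The libmachine "will retry after 2.3s / 3.5s ..." lines come from a retry helper that waits longer on each attempt, with jitter, while the KVM domain acquires a DHCP lease. A small sketch of that pattern under those assumptions (not the actual retry.go implementation):

    // retryWithBackoff retries fn with a growing, jittered delay, similar in
    // spirit to the "will retry after ..." messages in the log above.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(5, time.Second, func() error {
    		return fmt.Errorf("unable to find current IP address of domain")
    	})
    	fmt.Println("final:", err)
    }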
	I0108 22:16:03.139113  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:03.139272  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:03.158043  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:03.638583  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:03.638737  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:03.659573  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:04.139075  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:04.139225  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:04.158993  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:04.638600  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:04.638766  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:04.657099  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:05.138627  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:05.138728  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:05.156654  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:05.639289  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:05.639436  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:05.658060  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:06.139303  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:06.139466  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:06.153866  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:06.638492  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:06.638651  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:06.656088  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.138685  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:07.138840  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:07.158365  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.638744  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:07.638838  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:07.656010  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.463229  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:07.463273  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:07.463299  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:07.534774  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:07.534812  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:07.880243  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:07.886835  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:07.886881  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:08.380688  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:08.385776  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:08.385821  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:08.880979  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:08.890142  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:08.890180  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:09.380526  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:09.385856  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 200:
	ok
	I0108 22:16:09.394800  375205 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 22:16:09.394838  375205 api_server.go:131] duration metric: took 6.014875532s to wait for apiserver health ...
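The healthz sequence above is typical of an apiserver coming back up: anonymous requests are rejected with 403 at first, then the endpoint returns 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200 "ok". A sketch of polling /healthz over HTTPS; the anonymous, certificate-skipping client here is an assumption for illustration, while the real check wires in the cluster CA and its own timeouts:

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
    // or the overall timeout elapses.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // body is "ok"
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz did not report ok within %v", timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.61.153:8443/healthz", time.Minute))
    }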
	I0108 22:16:09.394851  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:16:09.394861  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:09.396785  375205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:09.398197  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:09.422683  375205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:09.464557  375205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:09.483416  375205 system_pods.go:59] 8 kube-system pods found
	I0108 22:16:09.483460  375205 system_pods.go:61] "coredns-76f75df574-v8fsw" [7d69f8ec-6684-49d0-8567-4032298a4e5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:09.483471  375205 system_pods.go:61] "etcd-no-preload-675668" [bc088c6e-5037-4e51-a021-2c5ac3c1c60c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:09.483488  375205 system_pods.go:61] "kube-apiserver-no-preload-675668" [0bbdf118-c47c-4298-ae5e-a984729ec21e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:09.483497  375205 system_pods.go:61] "kube-controller-manager-no-preload-675668" [2c3ff259-60a7-4205-b55f-85fe2d8e340d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:09.483513  375205 system_pods.go:61] "kube-proxy-dnbvk" [1803ec6b-5bd3-4ebb-bfd5-3a1356a1f168] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:09.483522  375205 system_pods.go:61] "kube-scheduler-no-preload-675668" [47737c5e-b59a-4df0-ac7c-36525e17733c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:09.483532  375205 system_pods.go:61] "metrics-server-57f55c9bc5-pk8bm" [71c7c744-c5fa-41e7-a26f-c04c30379b97] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:09.483537  375205 system_pods.go:61] "storage-provisioner" [1266430c-beda-4fa1-a057-cb07b8bf1f89] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:09.483547  375205 system_pods.go:74] duration metric: took 18.952011ms to wait for pod list to return data ...
	I0108 22:16:09.483562  375205 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:09.502939  375205 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:09.502989  375205 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:09.503007  375205 node_conditions.go:105] duration metric: took 19.439582ms to run NodePressure ...
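The two checks just above (listing kube-system pods, then reading node CPU and ephemeral-storage capacity for the NodePressure verification) can be reproduced with plain client-go; this is a sketch against a local kubeconfig, not the helpers in system_pods.go / node_conditions.go:

    // Lists kube-system pods and prints node capacity, mirroring the
    // system_pods and node_conditions checks in the log above.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
    			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
    	}
    }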
	I0108 22:16:09.503031  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:08.246675  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:08.247243  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:08.247302  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:08.247185  376345 retry.go:31] will retry after 3.860632675s: waiting for machine to come up
	I0108 22:16:08.139286  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:08.139413  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:08.155694  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:08.639385  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:08.639521  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:08.655368  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:09.139022  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:09.139171  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:09.153512  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:09.638642  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:09.638765  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:09.653202  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.138833  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:10.138924  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:10.153529  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.639273  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:10.639462  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:10.655947  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.655981  375293 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:10.655991  375293 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:10.656003  375293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:10.656082  375293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:10.706638  375293 cri.go:89] found id: ""
	I0108 22:16:10.706721  375293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:10.726540  375293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:10.739540  375293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:10.739619  375293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:10.751112  375293 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:10.751158  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:10.877306  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.453755  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.664034  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.778440  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.866216  375293 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:11.866364  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:12.366749  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.862826  374880 start.go:369] acquired machines lock for "old-k8s-version-079759" in 1m1.534060396s
	I0108 22:16:13.862908  374880 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:16:13.862922  374880 fix.go:54] fixHost starting: 
	I0108 22:16:13.863465  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:16:13.863514  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:16:13.890658  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0108 22:16:13.891256  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:16:13.891974  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:16:13.891997  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:16:13.892356  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:16:13.892526  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:13.892634  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:16:13.894503  374880 fix.go:102] recreateIfNeeded on old-k8s-version-079759: state=Stopped err=<nil>
	I0108 22:16:13.894532  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	W0108 22:16:13.894707  374880 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:16:13.896778  374880 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-079759" ...
	I0108 22:16:13.898346  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Start
	I0108 22:16:13.898517  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring networks are active...
	I0108 22:16:13.899441  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring network default is active
	I0108 22:16:13.899906  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring network mk-old-k8s-version-079759 is active
	I0108 22:16:13.900424  374880 main.go:141] libmachine: (old-k8s-version-079759) Getting domain xml...
	I0108 22:16:13.901232  374880 main.go:141] libmachine: (old-k8s-version-079759) Creating domain...
	I0108 22:16:10.069721  375205 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:10.077465  375205 kubeadm.go:787] kubelet initialised
	I0108 22:16:10.077494  375205 kubeadm.go:788] duration metric: took 7.739231ms waiting for restarted kubelet to initialise ...
	I0108 22:16:10.077503  375205 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:10.085099  375205 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-v8fsw" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:12.095498  375205 pod_ready.go:102] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:14.100054  375205 pod_ready.go:102] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"False"
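The pod_ready.go lines above poll each system-critical pod until its PodReady condition reports True; coredns still shows "Ready":"False" here because its container has not passed readiness yet. The predicate behind that wait, sketched with the core/v1 types (helper name is illustrative):

    // isPodReady returns true when the pod's PodReady condition is True,
    // which is what the "has status Ready: False" lines are waiting for.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
    		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
    	}}}
    	fmt.Println(isPodReady(pod)) // false, matching the log above
    }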
	I0108 22:16:12.111578  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.112089  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Found IP for machine: 192.168.50.18
	I0108 22:16:12.112118  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Reserving static IP address...
	I0108 22:16:12.112138  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has current primary IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.112627  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-292054", mac: "52:54:00:8d:25:78", ip: "192.168.50.18"} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.112660  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Reserved static IP address: 192.168.50.18
	I0108 22:16:12.112684  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | skip adding static IP to network mk-default-k8s-diff-port-292054 - found existing host DHCP lease matching {name: "default-k8s-diff-port-292054", mac: "52:54:00:8d:25:78", ip: "192.168.50.18"}
	I0108 22:16:12.112706  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Getting to WaitForSSH function...
	I0108 22:16:12.112729  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for SSH to be available...
	I0108 22:16:12.115245  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.115723  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.115762  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.115881  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Using SSH client type: external
	I0108 22:16:12.115917  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa (-rw-------)
	I0108 22:16:12.115947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:16:12.115967  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | About to run SSH command:
	I0108 22:16:12.116013  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | exit 0
	I0108 22:16:12.221209  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | SSH cmd err, output: <nil>: 
	I0108 22:16:12.221755  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetConfigRaw
	I0108 22:16:12.222634  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:12.225565  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.226008  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.226036  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.226326  375556 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/config.json ...
	I0108 22:16:12.226626  375556 machine.go:88] provisioning docker machine ...
	I0108 22:16:12.226658  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:12.226946  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.227160  375556 buildroot.go:166] provisioning hostname "default-k8s-diff-port-292054"
	I0108 22:16:12.227187  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.227381  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.230424  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.230867  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.230918  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.231036  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.231302  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.231511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.231674  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.231856  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:12.232448  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:12.232476  375556 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-292054 && echo "default-k8s-diff-port-292054" | sudo tee /etc/hostname
	I0108 22:16:12.382972  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-292054
	
	I0108 22:16:12.383015  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.386658  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.387055  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.387110  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.387410  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.387780  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.388020  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.388284  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.388576  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:12.388935  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:12.388954  375556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-292054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-292054/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-292054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:16:12.536473  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:16:12.536514  375556 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:16:12.536597  375556 buildroot.go:174] setting up certificates
	I0108 22:16:12.536619  375556 provision.go:83] configureAuth start
	I0108 22:16:12.536638  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.536995  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:12.540248  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.540775  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.540813  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.540982  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.544343  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.544924  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.544986  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.545143  375556 provision.go:138] copyHostCerts
	I0108 22:16:12.545241  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:16:12.545256  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:16:12.545329  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:16:12.545468  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:16:12.545485  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:16:12.545525  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:16:12.545603  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:16:12.545612  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:16:12.545630  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:16:12.545717  375556 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-292054 san=[192.168.50.18 192.168.50.18 localhost 127.0.0.1 minikube default-k8s-diff-port-292054]
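"generating server cert ... san=[...]" means producing a key pair and an x509 certificate signed by the profile's CA, with every listed IP and hostname embedded as a Subject Alternative Name so TLS clients can validate the endpoint under any of those names. A compressed sketch with crypto/x509 (throwaway in-memory CA, errors elided for brevity; not the provision.go code):

    // Signs a server certificate with SANs against a freshly generated CA,
    // illustrating what the "generating server cert" step produces.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "example-local-ca"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-292054"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-292054"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.50.18"), net.ParseIP("127.0.0.1")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }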
	I0108 22:16:12.853268  375556 provision.go:172] copyRemoteCerts
	I0108 22:16:12.853332  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:16:12.853359  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.856503  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.856926  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.856959  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.857295  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.857536  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.857699  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.857904  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:12.961751  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:16:12.999065  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 22:16:13.037282  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:16:13.075006  375556 provision.go:86] duration metric: configureAuth took 538.367435ms
	I0108 22:16:13.075048  375556 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:16:13.075403  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:16:13.075509  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.078643  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.079141  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.079187  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.079518  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.079765  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.079976  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.080145  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.080388  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:13.080860  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:13.080891  375556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:16:13.523316  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:16:13.523355  375556 machine.go:91] provisioned docker machine in 1.296708962s
	I0108 22:16:13.523391  375556 start.go:300] post-start starting for "default-k8s-diff-port-292054" (driver="kvm2")
	I0108 22:16:13.523427  375556 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:16:13.523458  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.523937  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:16:13.523982  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.528392  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.528941  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.529005  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.529344  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.529715  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.529947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.530160  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:13.644605  375556 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:16:13.651917  375556 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:16:13.651970  375556 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:16:13.652120  375556 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:16:13.652268  375556 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:16:13.652452  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:16:13.667715  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:13.707995  375556 start.go:303] post-start completed in 184.580746ms
	I0108 22:16:13.708032  375556 fix.go:56] fixHost completed within 21.398677633s
	I0108 22:16:13.708061  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.712186  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.712754  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.712785  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.713001  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.713308  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.713572  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.713784  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.714062  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:13.714576  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:13.714597  375556 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:16:13.862558  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752173.800899341
	
	I0108 22:16:13.862600  375556 fix.go:206] guest clock: 1704752173.800899341
	I0108 22:16:13.862613  375556 fix.go:219] Guest: 2024-01-08 22:16:13.800899341 +0000 UTC Remote: 2024-01-08 22:16:13.708038237 +0000 UTC m=+267.678081968 (delta=92.861104ms)
	I0108 22:16:13.862688  375556 fix.go:190] guest clock delta is within tolerance: 92.861104ms
	I0108 22:16:13.862700  375556 start.go:83] releasing machines lock for "default-k8s-diff-port-292054", held for 21.553389859s
	I0108 22:16:13.862760  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.863344  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:13.867702  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.868132  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.868160  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.868553  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869294  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869606  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869710  375556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:16:13.869908  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.870024  375556 ssh_runner.go:195] Run: cat /version.json
	I0108 22:16:13.870055  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.874047  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.874604  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.874637  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876082  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876102  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.876135  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.876339  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876083  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.876354  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.876518  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.876771  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.876808  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.876928  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:13.877140  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:14.020544  375556 ssh_runner.go:195] Run: systemctl --version
	I0108 22:16:14.030180  375556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:16:14.192218  375556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:16:14.200925  375556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:16:14.201038  375556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:16:14.223169  375556 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:16:14.223200  375556 start.go:475] detecting cgroup driver to use...
	I0108 22:16:14.223274  375556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:16:14.246782  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:16:14.264283  375556 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:16:14.264417  375556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:16:14.281460  375556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:16:14.295968  375556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:16:14.443907  375556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:16:14.611299  375556 docker.go:219] disabling docker service ...
	I0108 22:16:14.611425  375556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:16:14.630493  375556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:16:14.649912  375556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:16:14.787666  375556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:16:14.971826  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:16:15.004969  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:16:15.032889  375556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:16:15.032982  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.050131  375556 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:16:15.050223  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.066011  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.082365  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
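The sed edits above pin the pause image and switch CRI-O to the cgroupfs driver with conmon placed in the pod cgroup. Reconstructed from those commands, the relevant part of /etc/crio/crio.conf.d/02-crio.conf would end up roughly as follows (the section headers are assumed from the stock CRI-O config layout, not shown in this log):

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
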
	I0108 22:16:15.098387  375556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:16:15.115648  375556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:16:15.129675  375556 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:16:15.129848  375556 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:16:15.151333  375556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:16:15.165637  375556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:16:15.308416  375556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:16:15.580204  375556 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:16:15.580284  375556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:16:15.587895  375556 start.go:543] Will wait 60s for crictl version
	I0108 22:16:15.588108  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:16:15.594471  375556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:16:15.645175  375556 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:16:15.645273  375556 ssh_runner.go:195] Run: crio --version
	I0108 22:16:15.707630  375556 ssh_runner.go:195] Run: crio --version
	I0108 22:16:15.779275  375556 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 22:16:15.781032  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:15.784486  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:15.784896  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:15.784965  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:15.785126  375556 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0108 22:16:15.790707  375556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:15.810441  375556 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:16:15.810515  375556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:15.867423  375556 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:16:15.867591  375556 ssh_runner.go:195] Run: which lz4
	I0108 22:16:15.873029  375556 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 22:16:15.879394  375556 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:16:15.879500  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 22:16:12.867258  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.367211  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.866433  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.366622  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.866611  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.907073  375293 api_server.go:72] duration metric: took 3.040854669s to wait for apiserver process to appear ...
	I0108 22:16:14.907116  375293 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:14.907141  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:15.738179  374880 main.go:141] libmachine: (old-k8s-version-079759) Waiting to get IP...
	I0108 22:16:15.739231  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:15.739808  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:15.739893  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:15.739787  376492 retry.go:31] will retry after 271.587986ms: waiting for machine to come up
	I0108 22:16:16.013648  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.014344  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.014388  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.014267  376492 retry.go:31] will retry after 376.425749ms: waiting for machine to come up
	I0108 22:16:16.392497  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.392985  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.393013  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.392894  376492 retry.go:31] will retry after 340.776058ms: waiting for machine to come up
	I0108 22:16:16.735696  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.736412  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.736452  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.736349  376492 retry.go:31] will retry after 559.6759ms: waiting for machine to come up
	I0108 22:16:17.297397  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:17.297990  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:17.298027  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:17.297965  376492 retry.go:31] will retry after 738.214425ms: waiting for machine to come up
	I0108 22:16:18.038578  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:18.039239  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:18.039269  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:18.039120  376492 retry.go:31] will retry after 762.268706ms: waiting for machine to come up
	I0108 22:16:18.803986  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:18.804560  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:18.804589  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:18.804438  376492 retry.go:31] will retry after 1.027542644s: waiting for machine to come up
	I0108 22:16:15.104174  375205 pod_ready.go:92] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:15.104208  375205 pod_ready.go:81] duration metric: took 5.01907031s waiting for pod "coredns-76f75df574-v8fsw" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:15.104223  375205 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:17.117526  375205 pod_ready.go:102] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:19.615842  375205 pod_ready.go:102] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:17.971748  375556 crio.go:444] Took 2.098761 seconds to copy over tarball
	I0108 22:16:17.971905  375556 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:16:19.481826  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:19.481865  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:19.481883  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:19.529381  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:19.529427  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:19.907613  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:19.914772  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:19.914824  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:20.407461  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:20.418184  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:20.418238  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:20.908072  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:20.920042  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:20.920085  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:21.407506  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:21.414375  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I0108 22:16:21.428398  375293 api_server.go:141] control plane version: v1.28.4
	I0108 22:16:21.428439  375293 api_server.go:131] duration metric: took 6.521312808s to wait for apiserver health ...
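The wait recorded above (api_server.go) is a simple poll-until-200 loop against the apiserver's /healthz endpoint; anonymous requests answer 403 and then 500 while the RBAC and priority-class bootstrap post-start hooks are still completing, as the responses above show. A minimal standalone sketch of that kind of probe in Go — an illustration, not minikube's actual code; the URL is copied from this run, and TLS verification is skipped only because the probe runs before a trusted kubeconfig exists:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above; adjust for your cluster.
	url := "https://192.168.72.132:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving certificate is not trusted by the host at this
		// point in provisioning, so verification is skipped for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
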
	I0108 22:16:21.428451  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:16:21.428460  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:21.920874  375293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:22.268512  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:22.284953  375293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:22.309346  375293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:22.465452  375293 system_pods.go:59] 9 kube-system pods found
	I0108 22:16:22.465501  375293 system_pods.go:61] "coredns-5dd5756b68-wxfs6" [965cab31-c39a-4885-bc6f-6575fe026794] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:22.465516  375293 system_pods.go:61] "coredns-5dd5756b68-zbjfn" [1b521296-8e4c-4252-a729-5727cd71d3f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:22.465534  375293 system_pods.go:61] "etcd-embed-certs-903819" [be30d1b3-e4a8-4daf-9c0e-f3b776499471] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:22.465546  375293 system_pods.go:61] "kube-apiserver-embed-certs-903819" [530546d9-1cec-45f5-9e3e-f5d08e913cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:22.465563  375293 system_pods.go:61] "kube-controller-manager-embed-certs-903819" [bb0d60c9-cdaf-491d-aa20-5a522f351e17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:22.465573  375293 system_pods.go:61] "kube-proxy-gjlx8" [9247e922-69de-4e59-a6d2-06c791d43031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:22.465586  375293 system_pods.go:61] "kube-scheduler-embed-certs-903819" [1aa50057-5aa4-44b2-a762-6f0eee5b3856] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:22.465602  375293 system_pods.go:61] "metrics-server-57f55c9bc5-jswgz" [8f18e01f-981d-48fe-9ce6-5155794da657] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:22.465614  375293 system_pods.go:61] "storage-provisioner" [ea2ac609-5857-4597-9432-e2f4f4630ee2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:22.465629  375293 system_pods.go:74] duration metric: took 156.242171ms to wait for pod list to return data ...
	I0108 22:16:22.465643  375293 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:22.523465  375293 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:22.523529  375293 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:22.523552  375293 node_conditions.go:105] duration metric: took 57.897769ms to run NodePressure ...
	I0108 22:16:22.523585  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:19.833814  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:19.834296  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:19.834341  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:19.834229  376492 retry.go:31] will retry after 1.469300536s: waiting for machine to come up
	I0108 22:16:21.305138  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:21.305962  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:21.306001  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:21.305834  376492 retry.go:31] will retry after 1.215696449s: waiting for machine to come up
	I0108 22:16:22.523937  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:22.524780  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:22.524813  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:22.524676  376492 retry.go:31] will retry after 1.652609537s: waiting for machine to come up
	I0108 22:16:24.179958  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:24.180881  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:24.180910  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:24.180780  376492 retry.go:31] will retry after 2.03835476s: waiting for machine to come up
	I0108 22:16:21.115112  375205 pod_ready.go:92] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.115153  375205 pod_ready.go:81] duration metric: took 6.010921481s waiting for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.115169  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.130056  375205 pod_ready.go:92] pod "kube-apiserver-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.130113  375205 pod_ready.go:81] duration metric: took 14.932775ms waiting for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.130137  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.149011  375205 pod_ready.go:92] pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.149054  375205 pod_ready.go:81] duration metric: took 18.905543ms waiting for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.149071  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dnbvk" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.162994  375205 pod_ready.go:92] pod "kube-proxy-dnbvk" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.163037  375205 pod_ready.go:81] duration metric: took 13.956516ms waiting for pod "kube-proxy-dnbvk" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.163053  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.172926  375205 pod_ready.go:92] pod "kube-scheduler-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.172975  375205 pod_ready.go:81] duration metric: took 9.906476ms waiting for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.172991  375205 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:23.182086  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
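The pod_ready entries above poll each kube-system pod until its PodReady condition reports True, with a 4m0s budget per pod. A minimal client-go sketch of the same kind of check — an illustration, not minikube's pod_ready.go; the kubeconfig path is a placeholder and the pod name is copied from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the pod name below is taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-675668", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
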
	I0108 22:16:22.162439  375556 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.190451334s)
	I0108 22:16:22.162503  375556 crio.go:451] Took 4.190696 seconds to extract the tarball
	I0108 22:16:22.162522  375556 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:16:22.212617  375556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:22.290948  375556 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:16:22.290982  375556 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:16:22.291067  375556 ssh_runner.go:195] Run: crio config
	I0108 22:16:22.361099  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:16:22.361135  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:22.361166  375556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:16:22.361192  375556 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.18 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-292054 NodeName:default-k8s-diff-port-292054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:16:22.361488  375556 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.18
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-292054"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:16:22.361599  375556 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-292054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 22:16:22.361681  375556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:16:22.376350  375556 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:16:22.376489  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:16:22.389808  375556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0108 22:16:22.414305  375556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:16:22.433716  375556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0108 22:16:22.461925  375556 ssh_runner.go:195] Run: grep 192.168.50.18	control-plane.minikube.internal$ /etc/hosts
	I0108 22:16:22.467236  375556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:22.484487  375556 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054 for IP: 192.168.50.18
	I0108 22:16:22.484537  375556 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:16:22.484688  375556 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:16:22.484724  375556 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:16:22.484794  375556 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/client.key
	I0108 22:16:22.484845  375556 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.key.4ed28ecc
	I0108 22:16:22.484886  375556 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.key
	I0108 22:16:22.485012  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:16:22.485042  375556 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:16:22.485056  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:16:22.485077  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:16:22.485107  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:16:22.485133  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:16:22.485182  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:22.485917  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:16:22.516640  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:16:22.554723  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:22.589730  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:22.624933  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:22.656950  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:22.691213  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:22.725882  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:22.757465  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:22.789479  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:22.818877  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:22.848834  375556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:22.869951  375556 ssh_runner.go:195] Run: openssl version
	I0108 22:16:22.877921  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:22.892998  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.899697  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.899798  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.906225  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:22.918957  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:22.930809  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.937461  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.937595  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.945257  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:22.956453  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:22.969894  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.976162  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.976249  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.983601  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:22.995487  375556 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:23.002869  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:23.011231  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:23.019450  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:23.028645  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:23.036530  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:23.044216  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
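	The hash-and-symlink steps above follow the standard OpenSSL trust-store convention: `openssl x509 -hash -noout` prints the subject hash of a CA certificate, a `<hash>.0` symlink under /etc/ssl/certs lets OpenSSL locate it at verification time, and `-checkend 86400` confirms each cluster certificate stays valid for at least another day. Condensed, the pattern for one of the certificates seen above is (same files as in the log; a sketch, not additional commands from this run):

	    # compute the subject hash and expose the CA under /etc/ssl/certs/<hash>.0
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem)   # prints e.g. 3ec20f2e
	    sudo ln -fs /etc/ssl/certs/3419822.pem "/etc/ssl/certs/${hash}.0"
	    # exit non-zero if the certificate expires within the next 86400 seconds (24h)
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400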
	I0108 22:16:23.050779  375556 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:23.050875  375556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:23.050968  375556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:23.098736  375556 cri.go:89] found id: ""
	I0108 22:16:23.098806  375556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:23.110702  375556 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:23.110738  375556 kubeadm.go:636] restartCluster start
	I0108 22:16:23.110807  375556 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:23.122131  375556 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.124018  375556 kubeconfig.go:92] found "default-k8s-diff-port-292054" server: "https://192.168.50.18:8444"
	I0108 22:16:23.127827  375556 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:23.141921  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:23.142029  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:23.155738  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.642320  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:23.642416  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:23.655783  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:24.142361  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:24.142522  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:24.161739  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:24.642247  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:24.642392  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:24.659564  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:25.142097  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:25.142341  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:25.156773  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:25.642249  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:25.642362  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:25.655785  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.802042  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.278422708s)
	I0108 22:16:23.802099  375293 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:23.816719  375293 kubeadm.go:787] kubelet initialised
	I0108 22:16:23.816770  375293 kubeadm.go:788] duration metric: took 14.659036ms waiting for restarted kubelet to initialise ...
	I0108 22:16:23.816787  375293 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:23.831999  375293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:25.843652  375293 pod_ready.go:102] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:26.220729  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:26.221388  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:26.221424  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:26.221322  376492 retry.go:31] will retry after 2.215929666s: waiting for machine to come up
	I0108 22:16:28.440185  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:28.440859  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:28.440894  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:28.440781  376492 retry.go:31] will retry after 4.455149908s: waiting for machine to come up
	I0108 22:16:25.184929  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:27.682851  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:29.685033  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:26.142553  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:26.142728  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:26.160691  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:26.642356  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:26.642469  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:26.656481  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.142104  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:27.142265  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:27.157378  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.642473  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:27.642577  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:27.656662  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:28.142925  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:28.143080  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:28.160815  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:28.642072  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:28.642188  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:28.662580  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:29.142008  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:29.142158  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:29.161132  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:29.642780  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:29.642919  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:29.661247  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:30.142588  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:30.142747  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:30.159262  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:30.642472  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:30.642650  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:30.659741  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.847129  375293 pod_ready.go:102] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:30.347456  375293 pod_ready.go:92] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:30.347490  375293 pod_ready.go:81] duration metric: took 6.51546229s waiting for pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.347501  375293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.354929  375293 pod_ready.go:92] pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:30.354955  375293 pod_ready.go:81] duration metric: took 7.447354ms waiting for pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.354965  375293 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.867755  375293 pod_ready.go:92] pod "etcd-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.867788  375293 pod_ready.go:81] duration metric: took 1.512815387s waiting for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.867801  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.875662  375293 pod_ready.go:92] pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.875711  375293 pod_ready.go:81] duration metric: took 7.899159ms waiting for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.875730  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.885348  375293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.885395  375293 pod_ready.go:81] duration metric: took 9.655438ms waiting for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.885410  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gjlx8" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.943389  375293 pod_ready.go:92] pod "kube-proxy-gjlx8" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.943424  375293 pod_ready.go:81] duration metric: took 58.006295ms waiting for pod "kube-proxy-gjlx8" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.943435  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.337716  375293 pod_ready.go:92] pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:32.337752  375293 pod_ready.go:81] duration metric: took 394.305103ms waiting for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.337763  375293 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" ...
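	Each pod_ready wait above is a check of the pod's Ready condition in the kube-system namespace. A roughly equivalent manual check with kubectl, assuming the profile name doubles as the kubeconfig context as elsewhere in this report, would be:

	    kubectl --context embed-certs-903819 -n kube-system \
	        wait pod -l component=kube-scheduler --for=condition=Ready --timeout=4m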
	I0108 22:16:32.901098  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:32.901564  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:32.901601  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:32.901488  376492 retry.go:31] will retry after 3.655042594s: waiting for machine to come up
	I0108 22:16:32.182102  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:34.685634  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:31.142410  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:31.142532  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:31.156191  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:31.642990  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:31.643137  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:31.656623  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:32.142116  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:32.142225  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:32.155597  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:32.642804  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:32.642897  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:32.656038  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:33.142630  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:33.142742  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:33.155977  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:33.156022  375556 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:33.156049  375556 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:33.156064  375556 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:33.156127  375556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:33.205442  375556 cri.go:89] found id: ""
	I0108 22:16:33.205556  375556 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:33.225775  375556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:33.236014  375556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:33.236122  375556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:33.246331  375556 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:33.246385  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:33.389338  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.044093  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.279910  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.436859  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.536169  375556 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:34.536274  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:35.036740  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:35.536732  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:36.036604  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
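	The reconfigure path above boils down to re-running the individual kubeadm init phases against the generated /var/tmp/minikube/kubeadm.yaml and then polling for the kube-apiserver process. Sketched without the `env PATH=/var/lib/minikube/binaries/v1.28.4:$PATH` prefix that the log shows on each invocation:

	    sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	    sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	    sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	    sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml
	    # then wait for the apiserver process to appear
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done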
	I0108 22:16:34.346227  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.347971  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.558150  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.558817  374880 main.go:141] libmachine: (old-k8s-version-079759) Found IP for machine: 192.168.39.183
	I0108 22:16:36.558839  374880 main.go:141] libmachine: (old-k8s-version-079759) Reserving static IP address...
	I0108 22:16:36.558855  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has current primary IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.559397  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "old-k8s-version-079759", mac: "52:54:00:79:02:7b", ip: "192.168.39.183"} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.559451  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | skip adding static IP to network mk-old-k8s-version-079759 - found existing host DHCP lease matching {name: "old-k8s-version-079759", mac: "52:54:00:79:02:7b", ip: "192.168.39.183"}
	I0108 22:16:36.559471  374880 main.go:141] libmachine: (old-k8s-version-079759) Reserved static IP address: 192.168.39.183
	I0108 22:16:36.559495  374880 main.go:141] libmachine: (old-k8s-version-079759) Waiting for SSH to be available...
	I0108 22:16:36.559511  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Getting to WaitForSSH function...
	I0108 22:16:36.562077  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.562439  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.562496  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.562806  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Using SSH client type: external
	I0108 22:16:36.562846  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa (-rw-------)
	I0108 22:16:36.562938  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:16:36.562985  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | About to run SSH command:
	I0108 22:16:36.563005  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | exit 0
	I0108 22:16:36.655957  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | SSH cmd err, output: <nil>: 
	I0108 22:16:36.656393  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetConfigRaw
	I0108 22:16:36.657349  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:36.660624  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.661056  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.661097  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.661415  374880 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/config.json ...
	I0108 22:16:36.661673  374880 machine.go:88] provisioning docker machine ...
	I0108 22:16:36.661699  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:36.662007  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.662224  374880 buildroot.go:166] provisioning hostname "old-k8s-version-079759"
	I0108 22:16:36.662249  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.662416  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.665572  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.666013  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.666056  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.666311  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:36.666582  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.666770  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.666945  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:36.667141  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:36.667677  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:36.667700  374880 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-079759 && echo "old-k8s-version-079759" | sudo tee /etc/hostname
	I0108 22:16:36.813113  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-079759
	
	I0108 22:16:36.813174  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.816444  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.816774  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.816814  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.816995  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:36.817323  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.817559  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.817739  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:36.817969  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:36.818431  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:36.818461  374880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-079759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-079759/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-079759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:16:36.952252  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:16:36.952306  374880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:16:36.952343  374880 buildroot.go:174] setting up certificates
	I0108 22:16:36.952359  374880 provision.go:83] configureAuth start
	I0108 22:16:36.952372  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.952803  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:36.955895  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.956276  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.956310  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.956579  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.959251  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.959667  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.959723  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.959825  374880 provision.go:138] copyHostCerts
	I0108 22:16:36.959896  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:16:36.959909  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:16:36.959987  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:16:36.960106  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:16:36.960122  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:16:36.960152  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:16:36.960240  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:16:36.960251  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:16:36.960286  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:16:36.960370  374880 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-079759 san=[192.168.39.183 192.168.39.183 localhost 127.0.0.1 minikube old-k8s-version-079759]
	I0108 22:16:37.054312  374880 provision.go:172] copyRemoteCerts
	I0108 22:16:37.054396  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:16:37.054428  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.058048  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.058545  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.058580  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.058823  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.059165  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.059439  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.059614  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.158033  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:16:37.190220  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:16:37.219035  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 22:16:37.246894  374880 provision.go:86] duration metric: configureAuth took 294.516334ms
	I0108 22:16:37.246938  374880 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:16:37.247165  374880 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:16:37.247269  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.250766  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.251305  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.251344  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.251654  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.251992  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.252253  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.252456  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.252701  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:37.253066  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:37.253091  374880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:16:37.626837  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:16:37.626886  374880 machine.go:91] provisioned docker machine in 965.198968ms
	I0108 22:16:37.626899  374880 start.go:300] post-start starting for "old-k8s-version-079759" (driver="kvm2")
	I0108 22:16:37.626924  374880 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:16:37.626991  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.627562  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:16:37.627626  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.631567  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.631840  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.631876  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.632070  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.632322  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.632578  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.632749  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.732984  374880 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:16:37.740111  374880 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:16:37.740158  374880 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:16:37.740268  374880 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:16:37.740384  374880 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:16:37.740527  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:16:37.751840  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:37.780796  374880 start.go:303] post-start completed in 153.87709ms
	I0108 22:16:37.780833  374880 fix.go:56] fixHost completed within 23.917911044s
	I0108 22:16:37.780861  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.784200  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.784663  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.784698  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.784916  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.785192  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.785482  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.785652  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.785819  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:37.786310  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:37.786334  374880 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:16:37.908632  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752197.846451761
	
	I0108 22:16:37.908664  374880 fix.go:206] guest clock: 1704752197.846451761
	I0108 22:16:37.908677  374880 fix.go:219] Guest: 2024-01-08 22:16:37.846451761 +0000 UTC Remote: 2024-01-08 22:16:37.780837729 +0000 UTC m=+368.040141999 (delta=65.614032ms)
	I0108 22:16:37.908740  374880 fix.go:190] guest clock delta is within tolerance: 65.614032ms
	I0108 22:16:37.908756  374880 start.go:83] releasing machines lock for "old-k8s-version-079759", held for 24.045885784s
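	The `%!s(MISSING)` and `%!N(MISSING)` tokens a few lines up are an artifact of how the command is echoed through Go's fmt; the command actually sent to the guest is `date +%s.%N`. The skew check itself is just a comparison of guest and host timestamps, for example (a sketch using the machine's SSH key from this log; the ~66ms delta measured above is well within tolerance):

	    guest=$(ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa \
	        docker@192.168.39.183 'date +%s.%N')
	    host=$(date +%s.%N)
	    awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta: %.3fs\n", d }'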
	I0108 22:16:37.908801  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.909113  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:37.912363  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.912708  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.912745  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.913058  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913581  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913769  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913860  374880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:16:37.913906  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.914052  374880 ssh_runner.go:195] Run: cat /version.json
	I0108 22:16:37.914081  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.916674  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917009  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917330  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.917371  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917433  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.917523  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.917545  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917622  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.917791  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.917862  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.917973  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.918026  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.918185  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.918303  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:38.009398  374880 ssh_runner.go:195] Run: systemctl --version
	I0108 22:16:38.040945  374880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:16:38.191198  374880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:16:38.198405  374880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:16:38.198504  374880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:16:38.218602  374880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:16:38.218641  374880 start.go:475] detecting cgroup driver to use...
	I0108 22:16:38.218722  374880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:16:38.234161  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:16:38.250033  374880 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:16:38.250107  374880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:16:38.266262  374880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:16:38.281553  374880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:16:38.402503  374880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:16:38.558016  374880 docker.go:219] disabling docker service ...
	I0108 22:16:38.558124  374880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:16:38.573689  374880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:16:38.589002  374880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:16:38.718943  374880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:16:38.853252  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:16:38.869464  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:16:38.890384  374880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 22:16:38.890538  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.904645  374880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:16:38.904745  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.916308  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.927747  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.938877  374880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:16:38.951536  374880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:16:38.961810  374880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:16:38.961889  374880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:16:38.976131  374880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:16:38.990253  374880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:16:39.129313  374880 ssh_runner.go:195] Run: sudo systemctl restart crio
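	The CRI-O preparation above amounts to rewriting /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, per-pod conmon cgroup), enabling br_netfilter and IPv4 forwarding, and restarting the service. Condensed, using the same expressions as in the log:

	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter && sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio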
	I0108 22:16:39.322691  374880 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:16:39.322796  374880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:16:39.329204  374880 start.go:543] Will wait 60s for crictl version
	I0108 22:16:39.329317  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:39.333991  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:16:39.381363  374880 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:16:39.381484  374880 ssh_runner.go:195] Run: crio --version
	I0108 22:16:39.435964  374880 ssh_runner.go:195] Run: crio --version
	I0108 22:16:39.499543  374880 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0108 22:16:39.501084  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:39.504205  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:39.504541  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:39.504579  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:39.504935  374880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 22:16:39.510323  374880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:39.526998  374880 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:16:39.527057  374880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:39.577709  374880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0108 22:16:39.577793  374880 ssh_runner.go:195] Run: which lz4
	I0108 22:16:39.582925  374880 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 22:16:39.589373  374880 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:16:39.589421  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0108 22:16:37.184707  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:39.683810  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.537007  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:37.037157  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:37.061202  375556 api_server.go:72] duration metric: took 2.525037167s to wait for apiserver process to appear ...
	I0108 22:16:37.061229  375556 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:37.061250  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:37.061790  375556 api_server.go:269] stopped: https://192.168.50.18:8444/healthz: Get "https://192.168.50.18:8444/healthz": dial tcp 192.168.50.18:8444: connect: connection refused
	I0108 22:16:37.561995  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:38.852752  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:41.361118  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:42.562614  375556 api_server.go:269] stopped: https://192.168.50.18:8444/healthz: Get "https://192.168.50.18:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 22:16:42.562680  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:42.626918  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:42.626956  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:43.061435  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:43.078776  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:43.078841  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:43.561364  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:43.575304  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:43.575397  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:44.061694  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:44.072328  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:44.072394  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:44.561536  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:44.572055  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 200:
	ok
	I0108 22:16:44.586947  375556 api_server.go:141] control plane version: v1.28.4
	I0108 22:16:44.587011  375556 api_server.go:131] duration metric: took 7.52577273s to wait for apiserver health ...
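The healthz wait above is a plain retry loop: poll /healthz periodically, treat 403 (anonymous access before the RBAC bootstrap hook completes) and 500 (post-start hooks still failing) as transient, and stop once a 200 arrives or the wait budget runs out. A minimal stand-alone sketch of that pattern (editorial illustration, not minikube's api_server.go; the URL, interval, and timeout are placeholders):

// healthzwait.go - sketch of polling an apiserver /healthz endpoint until healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cluster-signed certificate; verification is
		// skipped here only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // healthz returned 200: the control plane is serving
			}
			// 403 and 500 responses are treated as transient; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, budget)
}

func main() {
	if err := waitForHealthz("https://192.168.50.18:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}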
	I0108 22:16:44.587029  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:16:44.587040  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:44.765569  375556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:41.520470  374880 crio.go:444] Took 1.937584 seconds to copy over tarball
	I0108 22:16:41.520541  374880 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:16:41.683864  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:44.183495  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:44.867194  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:44.881203  375556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:44.906051  375556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:44.958770  375556 system_pods.go:59] 8 kube-system pods found
	I0108 22:16:44.958813  375556 system_pods.go:61] "coredns-5dd5756b68-vcmh6" [4d87af85-075d-427c-b4ca-ba57421fc8de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:44.958823  375556 system_pods.go:61] "etcd-default-k8s-diff-port-292054" [5353bc6f-061b-414b-823b-fa224887733c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:44.958831  375556 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-292054" [aa609bfc-ba8f-4d82-bdcd-2f17e0b1b2a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:44.958838  375556 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-292054" [2500070d-a348-47a9-a1d6-525eb3ee12d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:44.958847  375556 system_pods.go:61] "kube-proxy-f4xsp" [d0987c89-c598-4ae9-a60a-bad8df066d0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:44.958867  375556 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-292054" [9b4e73b7-a4ff-469f-b03e-1170d068af2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:44.958883  375556 system_pods.go:61] "metrics-server-57f55c9bc5-6w57p" [7a85be99-ad7e-4866-a8d8-0972435dfd02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:44.958899  375556 system_pods.go:61] "storage-provisioner" [4be6edbe-cb8e-4598-9d23-1cefc0afc184] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:44.958908  375556 system_pods.go:74] duration metric: took 52.82566ms to wait for pod list to return data ...
	I0108 22:16:44.958923  375556 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:44.965171  375556 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:44.965220  375556 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:44.965235  375556 node_conditions.go:105] duration metric: took 6.306299ms to run NodePressure ...
	I0108 22:16:44.965271  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:43.845812  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:45.851004  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:45.115268  374880 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.594690355s)
	I0108 22:16:45.115304  374880 crio.go:451] Took 3.594805 seconds to extract the tarball
	I0108 22:16:45.115316  374880 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:16:45.165012  374880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:45.542219  374880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0108 22:16:45.542266  374880 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:16:45.542362  374880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:45.542384  374880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.542409  374880 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 22:16:45.542451  374880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.542489  374880 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.542392  374880 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.542666  374880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.542661  374880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.543883  374880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.543921  374880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.543888  374880 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 22:16:45.543944  374880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.543888  374880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:45.543970  374880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.543895  374880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.544327  374880 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.737830  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.747956  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0108 22:16:45.780688  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.799788  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.811226  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.819948  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.857132  374880 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0108 22:16:45.857195  374880 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.857257  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.867494  374880 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0108 22:16:45.867547  374880 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0108 22:16:45.867622  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.871438  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.900657  374880 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0108 22:16:45.900706  374880 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.900755  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.986789  374880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0108 22:16:45.986850  374880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.986909  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.001283  374880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0108 22:16:46.001335  374880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:46.001389  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.009750  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0108 22:16:46.009783  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0108 22:16:46.009830  374880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0108 22:16:46.009848  374880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0108 22:16:46.009879  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:46.009887  374880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:46.009887  374880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:46.009904  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:46.009929  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.009967  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:46.009933  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.173258  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0108 22:16:46.173293  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 22:16:46.173387  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:46.173402  374880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.173451  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0108 22:16:46.173458  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0108 22:16:46.173539  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:46.173588  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0108 22:16:46.238533  374880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0108 22:16:46.238562  374880 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.238589  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0108 22:16:46.238619  374880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.238692  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0108 22:16:46.499734  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:47.197262  374880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0108 22:16:47.197344  374880 cache_images.go:92] LoadImages completed in 1.65506117s
	W0108 22:16:47.197431  374880 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0108 22:16:47.197628  374880 ssh_runner.go:195] Run: crio config
	I0108 22:16:47.273121  374880 cni.go:84] Creating CNI manager for ""
	I0108 22:16:47.273164  374880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:47.273206  374880 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:16:47.273242  374880 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-079759 NodeName:old-k8s-version-079759 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 22:16:47.273439  374880 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-079759"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-079759
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.183:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:16:47.273557  374880 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-079759 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079759 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:16:47.273641  374880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 22:16:47.284374  374880 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:16:47.284528  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:16:47.295740  374880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0108 22:16:47.317874  374880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:16:47.339820  374880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0108 22:16:47.365063  374880 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0108 22:16:47.369942  374880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:47.387586  374880 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759 for IP: 192.168.39.183
	I0108 22:16:47.387637  374880 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:16:47.387862  374880 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:16:47.387929  374880 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:16:47.388036  374880 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.key
	I0108 22:16:47.388144  374880 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.key.a2b84326
	I0108 22:16:47.388185  374880 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.key
	I0108 22:16:47.388370  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:16:47.388426  374880 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:16:47.388449  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:16:47.388490  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:16:47.388524  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:16:47.388562  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:16:47.388629  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:47.389626  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:16:47.424129  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:16:47.455835  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:47.489732  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:47.523253  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:47.555019  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:47.587218  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:47.620629  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:47.654460  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:47.688945  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:47.722824  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:47.754016  374880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:47.773665  374880 ssh_runner.go:195] Run: openssl version
	I0108 22:16:47.779972  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:47.794327  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.801998  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.802101  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.808765  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:47.822088  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:47.836322  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.843412  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.843508  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.852467  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:47.871573  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:47.886132  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.892165  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.892250  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.898728  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:47.911118  374880 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:47.918486  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:47.928188  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:47.936324  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:47.942939  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:47.952136  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:47.962062  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
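Each `openssl x509 -noout -in ... -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it expires within that window and would need regeneration. A rough Go equivalent, as an editorial sketch (the certificate path is just an example):

// certcheck.go - sketch of an "expires within N" check like openssl's -checkend.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next `window`.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}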
	I0108 22:16:47.969861  374880 kubeadm.go:404] StartCluster: {Name:old-k8s-version-079759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079759 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:47.969986  374880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:47.970065  374880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:48.023933  374880 cri.go:89] found id: ""
	I0108 22:16:48.024025  374880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:48.040341  374880 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:48.040377  374880 kubeadm.go:636] restartCluster start
	I0108 22:16:48.040461  374880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:48.051709  374880 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:48.053467  374880 kubeconfig.go:92] found "old-k8s-version-079759" server: "https://192.168.39.183:8443"
	I0108 22:16:48.057824  374880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:48.071248  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:48.071367  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:48.086864  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:48.572297  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:48.572426  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:48.590996  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:49.072205  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:49.072316  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:49.085908  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:49.571496  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:49.571641  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:49.587609  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:46.683555  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:48.683848  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:47.463595  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.498282893s)
	I0108 22:16:47.463651  375556 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:47.494376  375556 kubeadm.go:787] kubelet initialised
	I0108 22:16:47.494409  375556 kubeadm.go:788] duration metric: took 30.746268ms waiting for restarted kubelet to initialise ...
	I0108 22:16:47.494419  375556 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:47.518711  375556 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:49.532387  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:47.854322  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:50.347325  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:52.349479  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:50.071318  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:50.071492  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:50.087514  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:50.572137  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:50.572248  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:50.586581  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.072060  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:51.072182  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:51.087008  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.571464  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:51.571586  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:51.585684  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:52.072246  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:52.072323  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:52.087689  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:52.572243  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:52.572347  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:52.587037  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:53.071470  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:53.071589  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:53.086911  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:53.571460  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:53.571553  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:53.586045  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:54.072236  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:54.072358  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:54.087701  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:54.572312  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:54.572446  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:54.587922  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.181229  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:53.182527  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:52.026615  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:54.027979  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:54.849162  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:57.346988  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:55.071292  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:55.071441  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:55.090623  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:55.572144  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:55.572231  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:55.587405  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:56.071926  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:56.072056  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:56.086264  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:56.571790  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:56.571930  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:56.586088  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:57.071438  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:57.071546  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:57.087310  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:57.571491  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:57.571640  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:57.585754  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:58.071604  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:58.071723  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:58.087027  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:58.087070  374880 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:58.087086  374880 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:58.087128  374880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:58.087206  374880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:58.137792  374880 cri.go:89] found id: ""
	I0108 22:16:58.137875  374880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:58.157140  374880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:58.171953  374880 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:58.172029  374880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:58.186287  374880 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:58.186325  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:58.316514  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.124691  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.386136  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.490503  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.609542  374880 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:59.609648  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:55.684783  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:58.189882  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:56.527144  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:58.529935  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:01.030202  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:59.350073  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:01.845861  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:00.109804  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:00.610728  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.110191  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.609754  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.638919  374880 api_server.go:72] duration metric: took 2.029378055s to wait for apiserver process to appear ...
	I0108 22:17:01.638952  374880 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:17:01.638975  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:00.681951  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:02.683028  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:04.685040  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:03.527242  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:05.527888  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:03.850211  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:06.350594  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:06.639278  374880 api_server.go:269] stopped: https://192.168.39.183:8443/healthz: Get "https://192.168.39.183:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 22:17:06.639347  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.110234  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:17:08.110269  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:17:08.110287  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.268403  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.268437  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:08.268451  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.300726  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.300787  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:08.639135  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.676558  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.676598  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:09.139592  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:09.151081  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:09.151120  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:09.639741  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:09.646812  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0108 22:17:09.656279  374880 api_server.go:141] control plane version: v1.16.0
	I0108 22:17:09.656318  374880 api_server.go:131] duration metric: took 8.017357804s to wait for apiserver health ...
	I0108 22:17:09.656333  374880 cni.go:84] Creating CNI manager for ""
	I0108 22:17:09.656342  374880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:17:09.658633  374880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:17:09.660081  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:17:09.670922  374880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:17:09.697148  374880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:17:09.710916  374880 system_pods.go:59] 7 kube-system pods found
	I0108 22:17:09.710958  374880 system_pods.go:61] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:09.710966  374880 system_pods.go:61] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:09.710974  374880 system_pods.go:61] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:09.710982  374880 system_pods.go:61] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Pending
	I0108 22:17:09.710988  374880 system_pods.go:61] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:09.710994  374880 system_pods.go:61] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:09.710999  374880 system_pods.go:61] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:09.711007  374880 system_pods.go:74] duration metric: took 13.819282ms to wait for pod list to return data ...
	I0108 22:17:09.711017  374880 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:17:09.717809  374880 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:17:09.717862  374880 node_conditions.go:123] node cpu capacity is 2
	I0108 22:17:09.717882  374880 node_conditions.go:105] duration metric: took 6.857808ms to run NodePressure ...
	I0108 22:17:09.717921  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:17:07.181980  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:09.182492  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:10.147851  374880 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:17:10.155593  374880 kubeadm.go:787] kubelet initialised
	I0108 22:17:10.155627  374880 kubeadm.go:788] duration metric: took 7.730921ms waiting for restarted kubelet to initialise ...
	I0108 22:17:10.155636  374880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:10.162330  374880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.173343  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.173384  374880 pod_ready.go:81] duration metric: took 11.015314ms waiting for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.173398  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.173408  374880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.181308  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "etcd-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.181354  374880 pod_ready.go:81] duration metric: took 7.925248ms waiting for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.181370  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "etcd-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.181382  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.201297  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.201340  374880 pod_ready.go:81] duration metric: took 19.943972ms waiting for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.201355  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.201364  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.212246  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.212303  374880 pod_ready.go:81] duration metric: took 10.921798ms waiting for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.212326  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.212337  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.554958  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-proxy-mfs65" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.554990  374880 pod_ready.go:81] duration metric: took 342.644311ms waiting for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.555000  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-proxy-mfs65" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.555014  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.952644  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.952690  374880 pod_ready.go:81] duration metric: took 397.663927ms waiting for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.952705  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.952721  374880 pod_ready.go:38] duration metric: took 797.073923ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:10.952756  374880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:17:10.966105  374880 ops.go:34] apiserver oom_adj: -16
	I0108 22:17:10.966142  374880 kubeadm.go:640] restartCluster took 22.925755113s
	I0108 22:17:10.966160  374880 kubeadm.go:406] StartCluster complete in 22.996305207s
	I0108 22:17:10.966183  374880 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:17:10.966269  374880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:17:10.968639  374880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:17:10.968991  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:17:10.969141  374880 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:17:10.969252  374880 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969268  374880 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969273  374880 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:17:10.969292  374880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-079759"
	I0108 22:17:10.969296  374880 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-079759"
	W0108 22:17:10.969314  374880 addons.go:246] addon metrics-server should already be in state true
	I0108 22:17:10.969351  374880 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969368  374880 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-079759"
	W0108 22:17:10.969375  374880 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:17:10.969393  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.969409  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.969785  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969823  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969832  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.969824  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969916  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.969926  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.990948  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0108 22:17:10.991126  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0108 22:17:10.991782  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:10.991979  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:10.992429  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:10.992473  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:10.992593  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:10.992618  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:10.992993  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:10.993076  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:10.993348  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:10.993741  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.993822  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.997882  374880 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-079759"
	W0108 22:17:10.997908  374880 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:17:10.997937  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.998375  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.998422  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.014704  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0108 22:17:11.015259  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.015412  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0108 22:17:11.016128  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.016160  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.016532  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.017165  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:11.017214  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.017521  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.018124  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.018140  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.018560  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.018854  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.018926  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0108 22:17:11.019671  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.020333  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.020353  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.020686  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.021353  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:11.021406  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.021696  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.024514  374880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:17:11.026172  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:17:11.026202  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:17:11.026238  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.031029  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.031951  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.031979  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.032327  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.032560  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.032709  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.032862  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.039130  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0108 22:17:11.039792  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.040408  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.040426  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.040821  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.041071  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.041764  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45497
	I0108 22:17:11.042444  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.042927  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.042952  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.043292  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.043498  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.043832  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.046099  374880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:17:07.529123  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:09.529950  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:11.048145  374880 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:17:11.048189  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:17:11.048231  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.045325  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.048952  374880 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:17:11.048976  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:17:11.049021  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.052466  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.052852  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.052891  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.053248  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.053542  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.053781  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.053964  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.062218  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.062324  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.062338  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.062363  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.063474  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.063729  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.063926  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.190657  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:17:11.190690  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:17:11.221757  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:17:11.254133  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:17:11.285976  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:17:11.286005  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:17:11.365594  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:17:11.365632  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:17:11.406494  374880 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 22:17:11.459160  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:17:11.475488  374880 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-079759" context rescaled to 1 replicas
	I0108 22:17:11.475557  374880 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:17:11.478952  374880 out.go:177] * Verifying Kubernetes components...
	I0108 22:17:11.480674  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:17:12.238037  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.016231756s)
	I0108 22:17:12.238158  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.238178  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.238585  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.238616  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.238630  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.238640  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.238649  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.238928  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.238953  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.292897  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.292926  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.293228  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.293249  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.297621  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.043443256s)
	I0108 22:17:12.297697  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.297717  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.298050  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.298107  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.298121  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.298136  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.298151  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.298377  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.298434  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.298449  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.460391  374880 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-079759" to be "Ready" ...
	I0108 22:17:12.460519  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.001301389s)
	I0108 22:17:12.460578  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.460600  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.460930  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.460950  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.460970  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.460980  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.461238  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.461262  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.461278  374880 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-079759"
	I0108 22:17:12.461289  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.464523  374880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0108 22:17:08.848369  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:11.349358  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:12.466030  374880 addons.go:508] enable addons completed in 1.496887794s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0108 22:17:14.465035  374880 node_ready.go:58] node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:11.186335  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:13.680427  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:12.029896  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:14.527011  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:13.847034  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:16.348875  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:16.465852  374880 node_ready.go:58] node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:18.965439  374880 node_ready.go:49] node "old-k8s-version-079759" has status "Ready":"True"
	I0108 22:17:18.965487  374880 node_ready.go:38] duration metric: took 6.505055778s waiting for node "old-k8s-version-079759" to be "Ready" ...
	I0108 22:17:18.965512  374880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:18.972414  374880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.981201  374880 pod_ready.go:92] pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.981242  374880 pod_ready.go:81] duration metric: took 8.788084ms waiting for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.981258  374880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.987118  374880 pod_ready.go:92] pod "etcd-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.987147  374880 pod_ready.go:81] duration metric: took 5.880499ms waiting for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.987165  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.995928  374880 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.995972  374880 pod_ready.go:81] duration metric: took 8.795387ms waiting for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.995990  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.006241  374880 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.006273  374880 pod_ready.go:81] duration metric: took 10.274527ms waiting for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.006288  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.366551  374880 pod_ready.go:92] pod "kube-proxy-mfs65" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.366588  374880 pod_ready.go:81] duration metric: took 360.29132ms waiting for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.366607  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.766225  374880 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.766266  374880 pod_ready.go:81] duration metric: took 399.648483ms waiting for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.766287  374880 pod_ready.go:38] duration metric: took 800.758248ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:19.766317  374880 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:17:19.766407  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:19.790384  374880 api_server.go:72] duration metric: took 8.314784167s to wait for apiserver process to appear ...
	I0108 22:17:19.790417  374880 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:17:19.790442  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:15.682742  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:18.181808  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:19.813424  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0108 22:17:19.814615  374880 api_server.go:141] control plane version: v1.16.0
	I0108 22:17:19.814638  374880 api_server.go:131] duration metric: took 24.214441ms to wait for apiserver health ...
	I0108 22:17:19.814647  374880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:17:19.967792  374880 system_pods.go:59] 7 kube-system pods found
	I0108 22:17:19.967850  374880 system_pods.go:61] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:19.967858  374880 system_pods.go:61] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:19.967865  374880 system_pods.go:61] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:19.967871  374880 system_pods.go:61] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Running
	I0108 22:17:19.967875  374880 system_pods.go:61] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:19.967882  374880 system_pods.go:61] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:19.967896  374880 system_pods.go:61] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:19.967908  374880 system_pods.go:74] duration metric: took 153.252828ms to wait for pod list to return data ...
	I0108 22:17:19.967925  374880 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:17:20.166954  374880 default_sa.go:45] found service account: "default"
	I0108 22:17:20.166999  374880 default_sa.go:55] duration metric: took 199.059234ms for default service account to be created ...
	I0108 22:17:20.167013  374880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:17:20.367805  374880 system_pods.go:86] 7 kube-system pods found
	I0108 22:17:20.367843  374880 system_pods.go:89] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:20.367851  374880 system_pods.go:89] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:20.367878  374880 system_pods.go:89] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:20.367889  374880 system_pods.go:89] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Running
	I0108 22:17:20.367895  374880 system_pods.go:89] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:20.367901  374880 system_pods.go:89] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:20.367908  374880 system_pods.go:89] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:20.367917  374880 system_pods.go:126] duration metric: took 200.897828ms to wait for k8s-apps to be running ...
	I0108 22:17:20.367931  374880 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:17:20.368002  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:17:20.384736  374880 system_svc.go:56] duration metric: took 16.789711ms WaitForService to wait for kubelet.
	I0108 22:17:20.384777  374880 kubeadm.go:581] duration metric: took 8.909185454s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:17:20.384805  374880 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:17:20.566662  374880 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:17:20.566699  374880 node_conditions.go:123] node cpu capacity is 2
	I0108 22:17:20.566713  374880 node_conditions.go:105] duration metric: took 181.900804ms to run NodePressure ...
	I0108 22:17:20.566733  374880 start.go:228] waiting for startup goroutines ...
	I0108 22:17:20.566743  374880 start.go:233] waiting for cluster config update ...
	I0108 22:17:20.566758  374880 start.go:242] writing updated cluster config ...
	I0108 22:17:20.567148  374880 ssh_runner.go:195] Run: rm -f paused
	I0108 22:17:20.625096  374880 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0108 22:17:20.627497  374880 out.go:177] 
	W0108 22:17:20.629694  374880 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0108 22:17:20.631265  374880 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0108 22:17:20.632916  374880 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-079759" cluster and "default" namespace by default
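The version-skew warning above is expected for this profile: the host kubectl is v1.29.0 while the cluster runs Kubernetes v1.16.0, far outside the supported +/-1 minor skew. A version-matched client is available through minikube's bundled kubectl, as the log itself suggests; for example (profile name taken from the line above, flags assumed to be the standard minikube CLI):

    minikube -p old-k8s-version-079759 kubectl -- get pods -A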
	I0108 22:17:16.529078  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:19.030929  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:18.848535  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:20.848603  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:20.182275  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:22.183490  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:24.682561  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:21.528256  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:23.529114  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:26.027560  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:23.346430  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:25.348995  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.182420  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:29.183480  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.530319  375556 pod_ready.go:92] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.530347  375556 pod_ready.go:81] duration metric: took 40.011595743s waiting for pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.530357  375556 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.537548  375556 pod_ready.go:92] pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.537577  375556 pod_ready.go:81] duration metric: took 7.212322ms waiting for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.537588  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.549788  375556 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.549830  375556 pod_ready.go:81] duration metric: took 12.233749ms waiting for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.549845  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.558337  375556 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.558364  375556 pod_ready.go:81] duration metric: took 8.510648ms waiting for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.558375  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4xsp" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.568980  375556 pod_ready.go:92] pod "kube-proxy-f4xsp" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.569008  375556 pod_ready.go:81] duration metric: took 10.626925ms waiting for pod "kube-proxy-f4xsp" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.569018  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.924746  375556 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.924792  375556 pod_ready.go:81] duration metric: took 355.765575ms waiting for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.924810  375556 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:29.934031  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.846645  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:29.848666  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:32.347317  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:31.681795  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.183509  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:31.935866  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.434680  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.850409  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:37.348417  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:36.681720  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:39.187220  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:36.933398  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:38.937527  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:39.849140  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:42.348407  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:41.681963  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:44.183281  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:41.434499  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:43.438745  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:45.934532  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:44.846802  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:46.847285  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:46.683139  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:49.180610  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:47.942228  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:50.434779  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:49.346290  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:51.346592  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:51.181365  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:53.182147  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:52.435305  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:54.933017  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:53.347169  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:55.847921  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:55.680794  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:57.683942  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:59.684807  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:56.933676  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:59.433266  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:58.346863  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:00.351598  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:02.358340  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:02.183383  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:04.684356  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:01.438892  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:03.942882  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:04.845380  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:06.850561  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:07.182060  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:09.182524  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:06.433230  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:08.435570  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:10.933834  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:08.853139  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:11.345311  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:11.183083  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.185196  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.435974  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.934920  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.347243  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.350752  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.683154  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:18.183396  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:17.938857  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.434388  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:17.849663  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.349073  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.349854  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.183740  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.681755  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.938829  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:24.940050  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:24.845935  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:26.848602  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:25.182926  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:27.681471  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:27.433983  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:29.933179  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:29.348482  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:31.848768  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:30.182593  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:32.184633  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:34.684351  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:31.935920  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:34.432407  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:33.849853  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:36.347248  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:37.185296  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:39.683266  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:36.434742  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:38.935788  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:38.347422  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:40.847846  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:42.184271  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:44.191899  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:41.434194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:43.435816  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:45.436582  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:43.348144  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:45.850291  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:46.681976  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:48.684379  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:47.934501  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:50.432989  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:48.346408  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:50.348943  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:51.181865  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:53.182990  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:52.433070  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:54.442432  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:52.846607  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:54.850642  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:57.347230  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:55.681392  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:57.683410  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:56.932551  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:58.935585  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:59.348127  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:01.848981  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:00.183662  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:02.681392  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:04.683283  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:01.433125  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:03.433714  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:05.434985  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:03.849460  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:06.349541  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:07.182372  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:09.681196  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:07.935969  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:10.435837  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:08.847292  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:10.850261  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:11.681770  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:13.683390  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:12.439563  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:14.933378  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:13.347217  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:15.847524  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:16.181226  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:18.182271  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:16.936400  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:19.433956  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:18.347048  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:20.846947  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:20.182396  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:22.681453  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:24.682678  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:21.934747  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:23.935826  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:22.847819  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:24.847981  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:27.346372  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:27.181829  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:29.686277  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:26.433266  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:28.433601  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:30.435331  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:29.349171  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:31.848107  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:31.686784  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.181838  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:32.932383  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.933487  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.349446  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:36.845807  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:36.182711  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:38.183592  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:37.433841  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:39.440368  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:38.847000  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:40.849528  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:40.681394  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:42.681803  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:41.934279  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:44.433480  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:43.346283  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:45.849805  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:45.182604  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:47.183086  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:49.681891  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:46.934165  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:49.433592  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:48.346422  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:50.346711  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:52.347386  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:52.181241  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:54.184167  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:51.435757  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:53.932937  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:55.935076  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:54.847306  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:56.849761  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:56.681736  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:59.182156  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:58.433892  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:00.435066  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:59.348176  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:01.847094  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:01.682869  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.183165  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:02.934032  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.935393  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.347516  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:06.846388  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:06.681333  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:08.684291  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:07.436354  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:09.934776  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:08.849876  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.346794  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.184760  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.681471  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.935382  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.935718  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.347573  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:15.846434  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:15.684425  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:18.182489  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:16.435556  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:18.934238  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:17.847804  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:19.851620  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:22.347305  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:20.183538  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:21.174145  375205 pod_ready.go:81] duration metric: took 4m0.001134505s waiting for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" ...
	E0108 22:20:21.174196  375205 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:20:21.174225  375205 pod_ready.go:38] duration metric: took 4m11.09670924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:20:21.174739  375205 kubeadm.go:640] restartCluster took 4m32.919154523s
	W0108 22:20:21.174932  375205 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:20:21.175031  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
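The interleaved pod_ready lines above come from several profiles being restarted in parallel, each polling the pod's Ready condition roughly every two seconds with a 4m0s cap; when metrics-server never reports Ready, restartCluster gives up and falls back to the `kubeadm reset` logged above. The same condition can be checked by hand; a minimal sketch, assuming the addon carries its usual k8s-app=metrics-server label (context name is the no-preload profile that appears later in this log):

    kubectl --context no-preload-675668 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m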
	I0108 22:20:21.437480  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:23.437985  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:25.934631  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:24.847918  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:27.354150  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:28.434309  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:30.935564  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:29.845550  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:31.847597  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:32.338942  375293 pod_ready.go:81] duration metric: took 4m0.001163118s waiting for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" ...
	E0108 22:20:32.338972  375293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:20:32.338994  375293 pod_ready.go:38] duration metric: took 4m8.522193777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:20:32.339022  375293 kubeadm.go:640] restartCluster took 4m31.730992352s
	W0108 22:20:32.339087  375293 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:20:32.339116  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 22:20:32.935958  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:35.434816  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:36.302806  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.127706719s)
	I0108 22:20:36.302938  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:20:36.321621  375205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:20:36.334281  375205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:20:36.346671  375205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
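The non-zero exit here is expected: the `kubeadm reset` that just ran removes these kubeconfigs, so minikube skips stale-config cleanup and goes straight to a fresh `kubeadm init` (next line), ignoring the preflight checks it knows are benign for a reused VM. A sketch of the same check, runnable inside the node:

    # expected to exit 2 right after `kubeadm reset`, which deletes these files
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
      || echo "no stale kubeconfigs; kubeadm init will recreate them"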
	I0108 22:20:36.346717  375205 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:20:36.614321  375205 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:20:37.936328  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:40.435692  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:42.933586  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:45.434194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:48.562754  375205 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0108 22:20:48.562854  375205 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:20:48.562933  375205 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:20:48.563069  375205 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:20:48.563228  375205 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:20:48.563339  375205 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:20:48.565241  375205 out.go:204]   - Generating certificates and keys ...
	I0108 22:20:48.565369  375205 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:20:48.565449  375205 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:20:48.565542  375205 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:20:48.565610  375205 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:20:48.565733  375205 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:20:48.565840  375205 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:20:48.565938  375205 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:20:48.566036  375205 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:20:48.566148  375205 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:20:48.566255  375205 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:20:48.566336  375205 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:20:48.566437  375205 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:20:48.566521  375205 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:20:48.566606  375205 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0108 22:20:48.566682  375205 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:20:48.566771  375205 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:20:48.566859  375205 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:20:48.566957  375205 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:20:48.567046  375205 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:20:48.569013  375205 out.go:204]   - Booting up control plane ...
	I0108 22:20:48.569130  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:20:48.569247  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:20:48.569353  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:20:48.569468  375205 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:20:48.569588  375205 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:20:48.569656  375205 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:20:48.569873  375205 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:20:48.569977  375205 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002900 seconds
	I0108 22:20:48.570115  375205 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:20:48.570289  375205 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:20:48.570372  375205 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:20:48.570558  375205 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-675668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:20:48.570648  375205 kubeadm.go:322] [bootstrap-token] Using token: t5purj.kqjcf0swk5rb5mxk
	I0108 22:20:48.572249  375205 out.go:204]   - Configuring RBAC rules ...
	I0108 22:20:48.572407  375205 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:20:48.572525  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:20:48.572698  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:20:48.572845  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:20:48.572985  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:20:48.573060  375205 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:20:48.573192  375205 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:20:48.573253  375205 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:20:48.573309  375205 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:20:48.573316  375205 kubeadm.go:322] 
	I0108 22:20:48.573365  375205 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:20:48.573372  375205 kubeadm.go:322] 
	I0108 22:20:48.573433  375205 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:20:48.573440  375205 kubeadm.go:322] 
	I0108 22:20:48.573466  375205 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:20:48.573516  375205 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:20:48.573559  375205 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:20:48.573565  375205 kubeadm.go:322] 
	I0108 22:20:48.573608  375205 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:20:48.573614  375205 kubeadm.go:322] 
	I0108 22:20:48.573656  375205 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:20:48.573663  375205 kubeadm.go:322] 
	I0108 22:20:48.573705  375205 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:20:48.573774  375205 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:20:48.573830  375205 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:20:48.573836  375205 kubeadm.go:322] 
	I0108 22:20:48.573902  375205 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:20:48.573968  375205 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:20:48.573974  375205 kubeadm.go:322] 
	I0108 22:20:48.574041  375205 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t5purj.kqjcf0swk5rb5mxk \
	I0108 22:20:48.574137  375205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:20:48.574168  375205 kubeadm.go:322] 	--control-plane 
	I0108 22:20:48.574179  375205 kubeadm.go:322] 
	I0108 22:20:48.574277  375205 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:20:48.574288  375205 kubeadm.go:322] 
	I0108 22:20:48.574369  375205 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t5purj.kqjcf0swk5rb5mxk \
	I0108 22:20:48.574510  375205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
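The join commands printed by kubeadm only matter when extra nodes are added; for this single-node profile minikube instead proceeds to configure CNI, label the node, and create the minikube-rbac clusterrolebinding (below). At this point the control plane can be verified directly against the admin kubeconfig kubeadm just wrote; a sketch using the binary path and kubeconfig locations that appear in the surrounding log:

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl \
      --kubeconfig=/etc/kubernetes/admin.conf get nodes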
	I0108 22:20:48.574532  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:20:48.574545  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:20:48.576776  375205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:20:48.578238  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:20:48.605767  375205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
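The 457-byte file copied here is minikube's bridge CNI config; its contents are not shown in this log, but it can be inspected on the node once the copy completes, e.g. (profile name taken from the surrounding log; output will vary by minikube version):

    minikube -p no-preload-675668 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist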
	I0108 22:20:48.656602  375205 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:20:48.656700  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=no-preload-675668 minikube.k8s.io/updated_at=2024_01_08T22_20_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:48.656701  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:48.954525  375205 ops.go:34] apiserver oom_adj: -16
	I0108 22:20:48.954705  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:49.454907  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.014263  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (17.675119667s)
	I0108 22:20:50.014357  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:20:50.032616  375293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:20:50.046779  375293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:20:50.059243  375293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:20:50.059321  375293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:20:50.125341  375293 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:20:50.125427  375293 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:20:50.314274  375293 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:20:50.314692  375293 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:20:50.314859  375293 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:20:50.613241  375293 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:20:47.934671  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:50.435675  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:50.615123  375293 out.go:204]   - Generating certificates and keys ...
	I0108 22:20:50.615298  375293 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:20:50.615442  375293 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:20:50.615588  375293 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:20:50.615684  375293 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:20:50.615978  375293 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:20:50.616644  375293 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:20:50.617070  375293 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:20:50.617625  375293 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:20:50.618175  375293 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:20:50.618746  375293 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:20:50.619222  375293 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:20:50.619315  375293 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:20:50.750595  375293 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:20:50.925827  375293 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:20:51.210091  375293 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:20:51.341979  375293 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:20:51.342383  375293 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:20:51.346252  375293 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:20:51.348515  375293 out.go:204]   - Booting up control plane ...
	I0108 22:20:51.348656  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:20:51.349029  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:20:51.350374  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:20:51.368778  375293 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:20:51.370050  375293 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:20:51.370127  375293 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:20:51.533956  375293 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:20:49.955240  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.455461  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.954656  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:51.455494  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:51.954708  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.454966  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.955643  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:53.454696  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:53.955234  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:54.455436  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.934792  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:55.433713  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:54.955090  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:55.454594  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:55.954634  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:56.455479  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:56.954866  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.455465  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.954857  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:58.454611  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:58.955416  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:59.455690  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.434365  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:59.932616  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:01.038928  375293 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503619 seconds
	I0108 22:21:01.039086  375293 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:21:01.066204  375293 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:21:01.633859  375293 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:21:01.634073  375293 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-903819 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:21:02.161422  375293 kubeadm.go:322] [bootstrap-token] Using token: m5gf05.lf63ehk148mqhzsy
	I0108 22:20:59.954870  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:00.455632  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:00.954611  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:01.455512  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:01.955058  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.130771  375205 kubeadm.go:1088] duration metric: took 13.474145806s to wait for elevateKubeSystemPrivileges.
	I0108 22:21:02.130812  375205 kubeadm.go:406] StartCluster complete in 5m13.930335887s
	I0108 22:21:02.130872  375205 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:02.131052  375205 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:21:02.133316  375205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:02.133620  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:21:02.133769  375205 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:21:02.133851  375205 addons.go:69] Setting storage-provisioner=true in profile "no-preload-675668"
	I0108 22:21:02.133874  375205 addons.go:237] Setting addon storage-provisioner=true in "no-preload-675668"
	W0108 22:21:02.133885  375205 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:21:02.133902  375205 addons.go:69] Setting default-storageclass=true in profile "no-preload-675668"
	I0108 22:21:02.133931  375205 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-675668"
	I0108 22:21:02.133944  375205 addons.go:69] Setting metrics-server=true in profile "no-preload-675668"
	I0108 22:21:02.133960  375205 addons.go:237] Setting addon metrics-server=true in "no-preload-675668"
	W0108 22:21:02.133970  375205 addons.go:246] addon metrics-server should already be in state true
	I0108 22:21:02.134007  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.133934  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.134493  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134492  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134531  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.133882  375205 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:21:02.134595  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134626  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.134679  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.159537  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0108 22:21:02.159560  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0108 22:21:02.159658  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0108 22:21:02.160218  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160310  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160353  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160816  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160832  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.160837  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160856  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.160923  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160934  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.161384  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161384  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161436  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161578  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.162110  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.162156  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.163070  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.163111  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.166373  375205 addons.go:237] Setting addon default-storageclass=true in "no-preload-675668"
	W0108 22:21:02.166398  375205 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:21:02.166437  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.166793  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.166851  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.186248  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0108 22:21:02.186805  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.187689  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.187721  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.189657  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.189934  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0108 22:21:02.190139  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.190885  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.192512  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.192561  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.192883  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.193058  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.193793  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.193846  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.194831  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0108 22:21:02.197130  375205 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:21:02.195453  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.198890  375205 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:02.198908  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:21:02.198928  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.199474  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.199496  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.202159  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.202458  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.204081  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.204440  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.204470  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.204907  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.205095  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.206369  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.206382  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.208865  375205 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:21:02.207548  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.210754  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:21:02.210777  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:21:02.210806  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.215494  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.216525  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.216572  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.217020  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.217270  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.217433  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.217548  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.218155  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0108 22:21:02.219031  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.219589  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.219613  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.220024  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.220222  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.223150  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.223618  375205 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:02.223638  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:21:02.223662  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.227537  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.228321  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.228364  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.228729  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.228986  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.229244  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.229385  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.376102  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:02.442186  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:21:02.442220  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:21:02.463490  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:02.511966  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:21:02.512007  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:21:02.516771  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:21:02.645916  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:02.645958  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:21:02.693299  375205 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-675668" context rescaled to 1 replicas
	I0108 22:21:02.693524  375205 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.153 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:21:02.696133  375205 out.go:177] * Verifying Kubernetes components...
	I0108 22:21:02.163532  375293 out.go:204]   - Configuring RBAC rules ...
	I0108 22:21:02.163667  375293 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:21:02.202175  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:21:02.230273  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:21:02.239237  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:21:02.245892  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:21:02.262139  375293 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:21:02.282319  375293 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:21:02.634155  375293 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:21:02.712856  375293 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:21:02.712895  375293 kubeadm.go:322] 
	I0108 22:21:02.713004  375293 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:21:02.713029  375293 kubeadm.go:322] 
	I0108 22:21:02.713122  375293 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:21:02.713138  375293 kubeadm.go:322] 
	I0108 22:21:02.713175  375293 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:21:02.713243  375293 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:21:02.713342  375293 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:21:02.713367  375293 kubeadm.go:322] 
	I0108 22:21:02.713461  375293 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:21:02.713491  375293 kubeadm.go:322] 
	I0108 22:21:02.713571  375293 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:21:02.713582  375293 kubeadm.go:322] 
	I0108 22:21:02.713672  375293 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:21:02.713775  375293 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:21:02.713903  375293 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:21:02.713916  375293 kubeadm.go:322] 
	I0108 22:21:02.714019  375293 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:21:02.714118  375293 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:21:02.714132  375293 kubeadm.go:322] 
	I0108 22:21:02.714275  375293 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m5gf05.lf63ehk148mqhzsy \
	I0108 22:21:02.714404  375293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:21:02.714427  375293 kubeadm.go:322] 	--control-plane 
	I0108 22:21:02.714439  375293 kubeadm.go:322] 
	I0108 22:21:02.714524  375293 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:21:02.714533  375293 kubeadm.go:322] 
	I0108 22:21:02.714623  375293 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m5gf05.lf63ehk148mqhzsy \
	I0108 22:21:02.714748  375293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:21:02.715538  375293 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:21:02.715812  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:21:02.715830  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:21:02.717948  375293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:21:02.719376  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:21:02.757728  375293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:21:02.792630  375293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:21:02.792734  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.792736  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=embed-certs-903819 minikube.k8s.io/updated_at=2024_01_08T22_21_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.697938  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:02.989011  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:03.814186  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437994456s)
	I0108 22:21:03.814254  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814255  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.350714909s)
	I0108 22:21:03.814286  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.297474579s)
	I0108 22:21:03.814302  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814321  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814317  375205 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0108 22:21:03.814318  375205 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.116341471s)
	I0108 22:21:03.814267  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814391  375205 node_ready.go:35] waiting up to 6m0s for node "no-preload-675668" to be "Ready" ...
	I0108 22:21:03.814667  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.814692  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.814734  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.814742  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.814765  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814789  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814821  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.814855  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.814868  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814878  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814994  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.815008  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.816606  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.816639  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.816649  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.844508  375205 node_ready.go:49] node "no-preload-675668" has status "Ready":"True"
	I0108 22:21:03.844562  375205 node_ready.go:38] duration metric: took 30.150881ms waiting for node "no-preload-675668" to be "Ready" ...
	I0108 22:21:03.844582  375205 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:03.895674  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.895707  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.896169  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.896196  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.896243  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.916148  375205 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-q6x86" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:04.208779  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.219716131s)
	I0108 22:21:04.208834  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:04.208853  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:04.209240  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:04.209262  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:04.209275  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:04.209289  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:04.209564  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:04.209585  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:04.209599  375205 addons.go:473] Verifying addon metrics-server=true in "no-preload-675668"
	I0108 22:21:04.211402  375205 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 22:21:04.212659  375205 addons.go:508] enable addons completed in 2.078891102s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0108 22:21:01.934579  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:03.936076  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:05.936317  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:03.317224  375293 ops.go:34] apiserver oom_adj: -16
	I0108 22:21:03.317384  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:03.817786  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:04.318579  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:04.817664  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.317487  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.818475  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:06.318507  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:06.818090  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:07.318335  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.932344  375205 pod_ready.go:92] pod "coredns-76f75df574-q6x86" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.932389  375205 pod_ready.go:81] duration metric: took 2.016206796s waiting for pod "coredns-76f75df574-q6x86" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.932404  375205 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.941282  375205 pod_ready.go:92] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.941316  375205 pod_ready.go:81] duration metric: took 8.903771ms waiting for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.941331  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.950226  375205 pod_ready.go:92] pod "kube-apiserver-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.950258  375205 pod_ready.go:81] duration metric: took 8.918375ms waiting for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.950273  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.972742  375205 pod_ready.go:92] pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.972794  375205 pod_ready.go:81] duration metric: took 22.511438ms waiting for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.972816  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b2nx2" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:06.981190  375205 pod_ready.go:92] pod "kube-proxy-b2nx2" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:06.981214  375205 pod_ready.go:81] duration metric: took 1.008391493s waiting for pod "kube-proxy-b2nx2" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:06.981225  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:07.121313  375205 pod_ready.go:92] pod "kube-scheduler-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:07.121348  375205 pod_ready.go:81] duration metric: took 140.114425ms waiting for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:07.121363  375205 pod_ready.go:38] duration metric: took 3.276764424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:07.121385  375205 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:21:07.121458  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:21:07.138313  375205 api_server.go:72] duration metric: took 4.444721115s to wait for apiserver process to appear ...
	I0108 22:21:07.138352  375205 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:21:07.138384  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:21:07.145653  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 200:
	ok
	I0108 22:21:07.148112  375205 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 22:21:07.148146  375205 api_server.go:131] duration metric: took 9.785033ms to wait for apiserver health ...
	I0108 22:21:07.148158  375205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:21:07.325218  375205 system_pods.go:59] 8 kube-system pods found
	I0108 22:21:07.325263  375205 system_pods.go:61] "coredns-76f75df574-q6x86" [6cad2e0f-a7af-453d-9eaf-55b56e41e27b] Running
	I0108 22:21:07.325268  375205 system_pods.go:61] "etcd-no-preload-675668" [cd434699-162a-4b04-853d-94dbb1254279] Running
	I0108 22:21:07.325273  375205 system_pods.go:61] "kube-apiserver-no-preload-675668" [d22859b8-f451-40b8-85d7-7f3d548b1af1] Running
	I0108 22:21:07.325279  375205 system_pods.go:61] "kube-controller-manager-no-preload-675668" [8b52fdfe-124a-4d08-b66b-41f1b051fe95] Running
	I0108 22:21:07.325283  375205 system_pods.go:61] "kube-proxy-b2nx2" [b6106f11-9345-4915-b7cc-d2671a7c4e72] Running
	I0108 22:21:07.325287  375205 system_pods.go:61] "kube-scheduler-no-preload-675668" [83562817-27bf-4265-88f0-3dad667687c5] Running
	I0108 22:21:07.325296  375205 system_pods.go:61] "metrics-server-57f55c9bc5-vb2kj" [45489720-2506-46fa-8833-02cbae6f122b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:21:07.325305  375205 system_pods.go:61] "storage-provisioner" [a1c64608-a169-455b-a5e9-0ecb4161432c] Running
	I0108 22:21:07.325323  375205 system_pods.go:74] duration metric: took 177.156331ms to wait for pod list to return data ...
	I0108 22:21:07.325337  375205 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:21:07.521751  375205 default_sa.go:45] found service account: "default"
	I0108 22:21:07.521796  375205 default_sa.go:55] duration metric: took 196.444982ms for default service account to be created ...
	I0108 22:21:07.521809  375205 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:21:07.725848  375205 system_pods.go:86] 8 kube-system pods found
	I0108 22:21:07.725888  375205 system_pods.go:89] "coredns-76f75df574-q6x86" [6cad2e0f-a7af-453d-9eaf-55b56e41e27b] Running
	I0108 22:21:07.725894  375205 system_pods.go:89] "etcd-no-preload-675668" [cd434699-162a-4b04-853d-94dbb1254279] Running
	I0108 22:21:07.725899  375205 system_pods.go:89] "kube-apiserver-no-preload-675668" [d22859b8-f451-40b8-85d7-7f3d548b1af1] Running
	I0108 22:21:07.725904  375205 system_pods.go:89] "kube-controller-manager-no-preload-675668" [8b52fdfe-124a-4d08-b66b-41f1b051fe95] Running
	I0108 22:21:07.725908  375205 system_pods.go:89] "kube-proxy-b2nx2" [b6106f11-9345-4915-b7cc-d2671a7c4e72] Running
	I0108 22:21:07.725913  375205 system_pods.go:89] "kube-scheduler-no-preload-675668" [83562817-27bf-4265-88f0-3dad667687c5] Running
	I0108 22:21:07.725920  375205 system_pods.go:89] "metrics-server-57f55c9bc5-vb2kj" [45489720-2506-46fa-8833-02cbae6f122b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:21:07.725926  375205 system_pods.go:89] "storage-provisioner" [a1c64608-a169-455b-a5e9-0ecb4161432c] Running
	I0108 22:21:07.725937  375205 system_pods.go:126] duration metric: took 204.121913ms to wait for k8s-apps to be running ...
	I0108 22:21:07.725946  375205 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:21:07.726014  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:07.745719  375205 system_svc.go:56] duration metric: took 19.7558ms WaitForService to wait for kubelet.
	I0108 22:21:07.745762  375205 kubeadm.go:581] duration metric: took 5.052181219s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:21:07.745787  375205 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:21:07.923051  375205 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:21:07.923108  375205 node_conditions.go:123] node cpu capacity is 2
	I0108 22:21:07.923124  375205 node_conditions.go:105] duration metric: took 177.330669ms to run NodePressure ...
	I0108 22:21:07.923140  375205 start.go:228] waiting for startup goroutines ...
	I0108 22:21:07.923150  375205 start.go:233] waiting for cluster config update ...
	I0108 22:21:07.923164  375205 start.go:242] writing updated cluster config ...
	I0108 22:21:07.923585  375205 ssh_runner.go:195] Run: rm -f paused
	I0108 22:21:07.985436  375205 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0108 22:21:07.987522  375205 out.go:177] * Done! kubectl is now configured to use "no-preload-675668" cluster and "default" namespace by default
	I0108 22:21:07.936490  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:10.434333  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:07.817734  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:08.318472  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:08.818320  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:09.317791  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:09.818298  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:10.317739  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:10.818233  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:11.317545  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:11.818344  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:12.317620  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:12.817911  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:13.317976  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:13.817670  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:14.317747  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:14.817596  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:15.318339  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:15.465438  375293 kubeadm.go:1088] duration metric: took 12.672788245s to wait for elevateKubeSystemPrivileges.
	I0108 22:21:15.465476  375293 kubeadm.go:406] StartCluster complete in 5m14.917822837s
	I0108 22:21:15.465503  375293 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:15.465612  375293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:21:15.468437  375293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:15.468772  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:21:15.468921  375293 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:21:15.469008  375293 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-903819"
	I0108 22:21:15.469017  375293 addons.go:69] Setting default-storageclass=true in profile "embed-certs-903819"
	I0108 22:21:15.469036  375293 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-903819"
	I0108 22:21:15.469052  375293 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 22:21:15.469064  375293 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:21:15.469060  375293 addons.go:69] Setting metrics-server=true in profile "embed-certs-903819"
	I0108 22:21:15.469037  375293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-903819"
	I0108 22:21:15.469111  375293 addons.go:237] Setting addon metrics-server=true in "embed-certs-903819"
	W0108 22:21:15.469128  375293 addons.go:246] addon metrics-server should already be in state true
	I0108 22:21:15.469139  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.469189  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.469584  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469635  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469676  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.469647  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.469585  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469825  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.488818  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0108 22:21:15.489266  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.491196  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39101
	I0108 22:21:15.491253  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0108 22:21:15.491759  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.491787  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.491816  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.492193  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.492365  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.492383  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.492747  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.492790  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.493002  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.493056  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.493670  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.493702  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.494305  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.494329  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.494841  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.495072  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.499830  375293 addons.go:237] Setting addon default-storageclass=true in "embed-certs-903819"
	W0108 22:21:15.499867  375293 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:21:15.499903  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.500396  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.500568  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.516135  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0108 22:21:15.516748  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.517517  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.517566  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.518117  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.518378  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.519282  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0108 22:21:15.520505  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.520596  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.522491  375293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:21:15.521662  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.524042  375293 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:15.524051  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.524059  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:21:15.524081  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.524560  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.524774  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.527237  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.529443  375293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:21:15.528147  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.528787  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.531192  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:21:15.531217  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:21:15.531249  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.531217  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.531343  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.531599  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.531825  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.532078  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.535903  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0108 22:21:15.536161  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.536527  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.536553  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.536618  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.536766  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.536994  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.537194  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.537359  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.537370  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.537426  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.537948  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.538486  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.538508  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.557562  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I0108 22:21:15.558072  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.558613  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.558643  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.559096  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.559318  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.561435  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.561769  375293 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:15.561788  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:21:15.561809  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.564959  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.565410  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.565442  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.565628  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.565836  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.565994  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.566145  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.740070  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:21:15.740112  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:21:15.762954  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:15.779320  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:15.819423  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:21:15.821997  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:21:15.822039  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:21:15.911195  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:15.911231  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:21:16.022419  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:16.061550  375293 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-903819" context rescaled to 1 replicas
	I0108 22:21:16.061625  375293 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:21:16.063813  375293 out.go:177] * Verifying Kubernetes components...
	I0108 22:21:12.435066  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:14.936374  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:16.065433  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:17.600634  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.837630321s)
	I0108 22:21:17.600727  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.600751  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.601111  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.601133  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:17.601145  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.601155  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.601162  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.601437  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.601478  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.601496  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:17.658136  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.658160  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.658512  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.658539  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.658556  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.633155  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.813676374s)
	I0108 22:21:18.633329  375293 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0108 22:21:18.633460  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.610999344s)
	I0108 22:21:18.633535  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.633576  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.633728  375293 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.568262314s)
	I0108 22:21:18.633793  375293 node_ready.go:35] waiting up to 6m0s for node "embed-certs-903819" to be "Ready" ...
	I0108 22:21:18.634123  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.634212  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.634247  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.634274  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.634293  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.634767  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.634836  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.634875  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.634901  375293 addons.go:473] Verifying addon metrics-server=true in "embed-certs-903819"
	I0108 22:21:18.638741  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.85936832s)
	I0108 22:21:18.638810  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.638826  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.639227  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.639301  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.639322  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.639333  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.639353  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.639611  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.639643  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.639652  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.641291  375293 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0108 22:21:17.433629  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:19.436354  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:18.642785  375293 addons.go:508] enable addons completed in 3.173862498s: enabled=[default-storageclass metrics-server storage-provisioner]
	I0108 22:21:18.710469  375293 node_ready.go:49] node "embed-certs-903819" has status "Ready":"True"
	I0108 22:21:18.710510  375293 node_ready.go:38] duration metric: took 76.686364ms waiting for node "embed-certs-903819" to be "Ready" ...
	I0108 22:21:18.710526  375293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:18.737405  375293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.747084  375293 pod_ready.go:92] pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.747120  375293 pod_ready.go:81] duration metric: took 1.009672279s waiting for pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.747136  375293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.758191  375293 pod_ready.go:92] pod "etcd-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.758217  375293 pod_ready.go:81] duration metric: took 11.073973ms waiting for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.758227  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.770167  375293 pod_ready.go:92] pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.770199  375293 pod_ready.go:81] duration metric: took 11.962809ms waiting for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.770213  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.778549  375293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.778576  375293 pod_ready.go:81] duration metric: took 8.355574ms waiting for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.778593  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqj9b" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.291841  375293 pod_ready.go:92] pod "kube-proxy-hqj9b" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:20.291889  375293 pod_ready.go:81] duration metric: took 513.287335ms waiting for pod "kube-proxy-hqj9b" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.291907  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.639437  375293 pod_ready.go:92] pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:20.639482  375293 pod_ready.go:81] duration metric: took 347.563689ms waiting for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.639507  375293 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:22.648411  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:21.933418  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:24.435043  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:25.150951  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:27.650444  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:26.937451  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:27.925059  375556 pod_ready.go:81] duration metric: took 4m0.000207907s waiting for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" ...
	E0108 22:21:27.925103  375556 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:21:27.925128  375556 pod_ready.go:38] duration metric: took 4m40.430696194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:27.925167  375556 kubeadm.go:640] restartCluster took 5m4.814420494s
	W0108 22:21:27.925297  375556 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:21:27.925360  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 22:21:30.149112  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:32.149588  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:34.150894  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:36.649733  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:39.151257  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:41.647739  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:43.145693  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.220300874s)
	I0108 22:21:43.145789  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:43.162489  375556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:21:43.174147  375556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:21:43.184922  375556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:21:43.184985  375556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:21:43.249215  375556 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:21:43.249349  375556 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:21:43.441703  375556 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:21:43.441851  375556 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:21:43.441998  375556 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:21:43.739390  375556 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:21:43.742109  375556 out.go:204]   - Generating certificates and keys ...
	I0108 22:21:43.742213  375556 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:21:43.742298  375556 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:21:43.742469  375556 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:21:43.742561  375556 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:21:43.742651  375556 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:21:43.743428  375556 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:21:43.744699  375556 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:21:43.746015  375556 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:21:43.747206  375556 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:21:43.748318  375556 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:21:43.749156  375556 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:21:43.749237  375556 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:21:43.859844  375556 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:21:44.418300  375556 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:21:44.582066  375556 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:21:44.829395  375556 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:21:44.830276  375556 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:21:44.833494  375556 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:21:44.835724  375556 out.go:204]   - Booting up control plane ...
	I0108 22:21:44.835871  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:21:44.835997  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:21:44.836115  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:21:44.858575  375556 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:21:44.859658  375556 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:21:44.859774  375556 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:21:45.004925  375556 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:21:43.648821  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:46.148491  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:48.152137  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:50.649779  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:54.508960  375556 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503706 seconds
	I0108 22:21:54.509100  375556 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:21:54.534526  375556 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:21:55.088263  375556 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:21:55.088497  375556 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-292054 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:21:55.625246  375556 kubeadm.go:322] [bootstrap-token] Using token: ca3oft.99pjh791kq903kea
	I0108 22:21:55.627406  375556 out.go:204]   - Configuring RBAC rules ...
	I0108 22:21:55.627535  375556 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:21:55.635469  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:21:55.658589  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:21:55.664394  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:21:55.670923  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:21:55.678315  375556 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:21:55.707544  375556 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:21:56.011289  375556 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:21:56.074068  375556 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:21:56.074122  375556 kubeadm.go:322] 
	I0108 22:21:56.074195  375556 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:21:56.074210  375556 kubeadm.go:322] 
	I0108 22:21:56.074305  375556 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:21:56.074315  375556 kubeadm.go:322] 
	I0108 22:21:56.074346  375556 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:21:56.074474  375556 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:21:56.074550  375556 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:21:56.074560  375556 kubeadm.go:322] 
	I0108 22:21:56.074635  375556 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:21:56.074649  375556 kubeadm.go:322] 
	I0108 22:21:56.074713  375556 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:21:56.074723  375556 kubeadm.go:322] 
	I0108 22:21:56.074810  375556 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:21:56.074933  375556 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:21:56.075027  375556 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:21:56.075037  375556 kubeadm.go:322] 
	I0108 22:21:56.075161  375556 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:21:56.075285  375556 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:21:56.075295  375556 kubeadm.go:322] 
	I0108 22:21:56.075430  375556 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ca3oft.99pjh791kq903kea \
	I0108 22:21:56.075574  375556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:21:56.075612  375556 kubeadm.go:322] 	--control-plane 
	I0108 22:21:56.075621  375556 kubeadm.go:322] 
	I0108 22:21:56.075733  375556 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:21:56.075744  375556 kubeadm.go:322] 
	I0108 22:21:56.075843  375556 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ca3oft.99pjh791kq903kea \
	I0108 22:21:56.075969  375556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:21:56.076235  375556 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:21:56.076281  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:21:56.076299  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:21:56.078385  375556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:21:56.079942  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:21:53.149618  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:55.649585  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:57.650103  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:56.112245  375556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:21:56.183435  375556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:21:56.183568  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:56.183570  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=default-k8s-diff-port-292054 minikube.k8s.io/updated_at=2024_01_08T22_21_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:56.217296  375556 ops.go:34] apiserver oom_adj: -16
	I0108 22:21:56.721884  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:57.222982  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:57.722219  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:58.222712  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:58.722544  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:59.222082  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:59.722808  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.222562  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.722284  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.149913  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:02.650967  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:01.222401  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:01.722606  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:02.222313  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:02.722582  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:03.222793  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:03.722359  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:04.222245  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:04.722706  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.222841  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.722871  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.148941  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:07.149461  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:06.222648  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:06.722581  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:07.222288  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:07.722274  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.222744  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.722856  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.963467  375556 kubeadm.go:1088] duration metric: took 12.779973028s to wait for elevateKubeSystemPrivileges.
	I0108 22:22:08.963522  375556 kubeadm.go:406] StartCluster complete in 5m45.912753673s
	I0108 22:22:08.963553  375556 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:22:08.963665  375556 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:22:08.966435  375556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:22:08.966775  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:22:08.966928  375556 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:22:08.967034  375556 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967075  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:22:08.967095  375556 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.967104  375556 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:22:08.967152  375556 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967183  375556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-292054"
	I0108 22:22:08.967192  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.967271  375556 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967300  375556 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.967310  375556 addons.go:246] addon metrics-server should already be in state true
	I0108 22:22:08.967375  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.967667  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967695  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.967756  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967769  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967779  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.967796  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.986925  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0108 22:22:08.987023  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I0108 22:22:08.987549  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.987698  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.988282  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.988313  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.988483  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.988508  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.988606  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0108 22:22:08.989056  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.989111  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.989337  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:08.989834  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.989872  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.990158  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.990780  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.990796  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.991245  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.991880  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.991911  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.995239  375556 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.995265  375556 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:22:08.995290  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.995820  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.995865  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:09.011939  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0108 22:22:09.012468  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.013299  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.013318  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.013724  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.013935  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
	I0108 22:22:09.014168  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.014906  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.015481  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.015498  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.015842  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.016396  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:09.016424  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:09.016659  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.016741  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
	I0108 22:22:09.019481  375556 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:22:09.017701  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.021632  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:22:09.021669  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:22:09.021704  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.022354  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.022387  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.022852  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.023158  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.025362  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.027347  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.029567  375556 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:22:09.027877  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.028367  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.032055  375556 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:22:09.032070  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:22:09.032103  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.032160  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.032368  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.032489  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.032591  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.037266  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.037969  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.038003  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.038588  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.038650  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0108 22:22:09.038933  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.039112  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.039299  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.039313  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.039936  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.039974  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.040395  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.040652  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.042584  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.043735  375556 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:22:09.043754  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:22:09.043774  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.047511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.047647  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.047668  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.047828  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.048115  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.048267  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.048432  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.273503  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:22:09.286359  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:22:09.286398  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:22:09.395127  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:22:09.395521  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:22:09.399318  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:22:09.399351  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:22:09.529413  375556 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-292054" context rescaled to 1 replicas
	I0108 22:22:09.529456  375556 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:22:09.531970  375556 out.go:177] * Verifying Kubernetes components...
	I0108 22:22:09.533935  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:22:09.608669  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:22:09.608706  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:22:09.762095  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:22:11.642700  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369133486s)
	I0108 22:22:11.642752  375556 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0108 22:22:12.525251  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.130061811s)
	I0108 22:22:12.525333  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525335  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.129764757s)
	I0108 22:22:12.525352  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.525383  375556 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.99138928s)
	I0108 22:22:12.525439  375556 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-292054" to be "Ready" ...
	I0108 22:22:12.525390  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.525785  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.525799  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.525810  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525820  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.526200  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526208  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526224  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.526234  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.526244  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.526250  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526320  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526345  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.526627  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526640  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526644  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.600599  375556 node_ready.go:49] node "default-k8s-diff-port-292054" has status "Ready":"True"
	I0108 22:22:12.600630  375556 node_ready.go:38] duration metric: took 75.170013ms waiting for node "default-k8s-diff-port-292054" to be "Ready" ...
	I0108 22:22:12.600642  375556 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:22:12.607695  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.607735  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.608178  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.608205  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.698479  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.93630517s)
	I0108 22:22:12.698597  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.698624  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.699090  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.699114  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.699129  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.699141  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.699570  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.699611  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.699628  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.699642  375556 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-292054"
	I0108 22:22:12.702579  375556 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 22:22:09.152248  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:11.649021  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:12.704051  375556 addons.go:508] enable addons completed in 3.737129591s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0108 22:22:12.730733  375556 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.740214  375556 pod_ready.go:92] pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.740241  375556 pod_ready.go:81] duration metric: took 1.009466865s waiting for pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.740252  375556 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.749855  375556 pod_ready.go:92] pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.749884  375556 pod_ready.go:81] duration metric: took 9.624914ms waiting for pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.749897  375556 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.774037  375556 pod_ready.go:92] pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.774082  375556 pod_ready.go:81] duration metric: took 24.173765ms waiting for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.774099  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.793737  375556 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.793763  375556 pod_ready.go:81] duration metric: took 19.654354ms waiting for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.793786  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.802646  375556 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.802675  375556 pod_ready.go:81] duration metric: took 8.880262ms waiting for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.802686  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bwmkb" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:14.935671  375556 pod_ready.go:92] pod "kube-proxy-bwmkb" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:14.935701  375556 pod_ready.go:81] duration metric: took 1.133008415s waiting for pod "kube-proxy-bwmkb" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:14.935712  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:15.337751  375556 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:15.337785  375556 pod_ready.go:81] duration metric: took 402.065003ms waiting for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:15.337799  375556 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.651032  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:16.150676  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:17.347997  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:19.848727  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:18.651581  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:21.153888  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:22.348002  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:24.348563  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:23.159095  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:25.648575  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:27.650462  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:26.847900  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:28.848176  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:30.148277  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:32.148917  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:31.353639  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:33.847750  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:34.649869  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:36.650396  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:36.349185  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:38.846642  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:40.851501  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:39.148741  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:41.150479  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:43.348737  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:45.848448  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:43.649911  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:46.149760  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:48.348731  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:50.849503  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:48.648402  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:50.649986  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:53.349307  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:55.349864  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:53.152397  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:55.651270  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:57.652287  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:57.854209  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:00.347211  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:59.655447  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:02.151802  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:02.351659  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:04.848930  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:04.650649  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:07.148845  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:06.864466  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:09.349319  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:09.150267  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:11.647897  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:11.350470  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:13.846976  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:13.648246  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:15.653072  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:16.348755  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:18.847624  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:20.850947  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:18.147230  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:20.148799  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:22.150181  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:22.854027  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:25.347172  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:24.648528  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:26.650104  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:27.350880  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:29.847065  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:28.651914  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:31.149983  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:31.849609  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:33.849918  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:35.852770  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:33.648054  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:35.650693  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:38.346376  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:40.347831  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:38.148131  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:40.149293  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:42.151041  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:42.845779  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:44.849417  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:44.655548  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:47.150423  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:46.850811  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:49.347304  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:49.652923  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:52.149820  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:51.348180  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:53.846474  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:55.847511  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:54.649820  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:57.149372  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:57.849233  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:00.348798  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:59.154056  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:01.649087  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:02.349247  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:04.350582  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:03.650176  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:06.153560  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:06.848567  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:09.349670  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:08.649461  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:11.149266  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:11.847194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:13.847282  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:15.849466  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:13.650152  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:15.653477  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:17.849683  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:20.348186  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:18.150536  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:20.650961  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:22.849232  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:25.349020  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:23.149893  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:25.151776  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:27.649498  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:27.848253  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:29.849644  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:29.651074  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:32.151463  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:32.348246  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:34.349140  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:34.650582  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:36.651676  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:36.848220  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:38.848664  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:40.848971  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:39.152183  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:41.648320  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:42.849338  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:45.347960  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:44.150739  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:46.649332  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:47.350030  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:49.847947  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:48.650293  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:50.650602  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:52.344857  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:54.347419  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:53.149776  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:55.150342  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:57.648269  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:56.347866  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:58.350081  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:00.848175  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:59.650591  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:02.149598  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:03.349797  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:05.849888  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:04.648771  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:06.651847  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:08.346160  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:10.348673  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:09.149033  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:11.149301  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:12.352279  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:14.846849  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:13.153318  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:15.651109  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:16.849657  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:19.347996  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:18.150751  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:20.650211  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:20.650242  375293 pod_ready.go:81] duration metric: took 4m0.010726332s waiting for pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace to be "Ready" ...
	E0108 22:25:20.650252  375293 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 22:25:20.650259  375293 pod_ready.go:38] duration metric: took 4m1.939720475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:25:20.650300  375293 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:25:20.650336  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:20.650406  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:20.714451  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:20.714500  375293 cri.go:89] found id: ""
	I0108 22:25:20.714513  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:20.714621  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.720237  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:20.720367  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:20.767857  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:20.767904  375293 cri.go:89] found id: ""
	I0108 22:25:20.767916  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:20.767995  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.772859  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:20.772969  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:20.817193  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:20.817225  375293 cri.go:89] found id: ""
	I0108 22:25:20.817236  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:20.817310  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.824003  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:20.824113  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:20.884204  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:20.884252  375293 cri.go:89] found id: ""
	I0108 22:25:20.884263  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:20.884335  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.889658  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:20.889756  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:20.949423  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:20.949460  375293 cri.go:89] found id: ""
	I0108 22:25:20.949472  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:20.949543  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.954856  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:20.954944  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:21.011490  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:21.011538  375293 cri.go:89] found id: ""
	I0108 22:25:21.011551  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:21.011629  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:21.017544  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:21.017638  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:21.066267  375293 cri.go:89] found id: ""
	I0108 22:25:21.066310  375293 logs.go:284] 0 containers: []
	W0108 22:25:21.066322  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:21.066331  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:21.066404  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:21.123537  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:21.123571  375293 cri.go:89] found id: ""
	I0108 22:25:21.123583  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:21.123660  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:21.129269  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:21.129309  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:21.200266  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:21.200308  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:21.246669  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:21.246705  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:21.265861  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:21.265908  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:21.327968  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:21.328016  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:21.386940  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:21.386986  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:21.443896  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:21.443941  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:21.496699  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:21.496746  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:21.962773  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:21.962820  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:22.024288  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:22.024330  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:22.133928  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:22.133976  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:22.301006  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:22.301051  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:21.348655  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:23.350759  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:25.351301  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:24.847470  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:25:24.867718  375293 api_server.go:72] duration metric: took 4m8.80605206s to wait for apiserver process to appear ...
	I0108 22:25:24.867750  375293 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:25:24.867788  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:24.867842  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:24.918048  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:24.918090  375293 cri.go:89] found id: ""
	I0108 22:25:24.918104  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:24.918196  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:24.923984  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:24.924096  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:24.981033  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:24.981058  375293 cri.go:89] found id: ""
	I0108 22:25:24.981066  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:24.981116  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:24.985729  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:24.985802  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:25.038522  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:25.038558  375293 cri.go:89] found id: ""
	I0108 22:25:25.038570  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:25.038637  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.043106  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:25.043218  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:25.100189  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:25.100218  375293 cri.go:89] found id: ""
	I0108 22:25:25.100230  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:25.100298  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.107135  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:25.107252  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:25.155243  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:25.155276  375293 cri.go:89] found id: ""
	I0108 22:25:25.155288  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:25.155354  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.160457  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:25.160559  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:25.214754  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:25.214788  375293 cri.go:89] found id: ""
	I0108 22:25:25.214799  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:25.214855  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.219504  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:25.219595  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:25.267255  375293 cri.go:89] found id: ""
	I0108 22:25:25.267302  375293 logs.go:284] 0 containers: []
	W0108 22:25:25.267318  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:25.267329  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:25.267442  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:25.322636  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:25.322668  375293 cri.go:89] found id: ""
	I0108 22:25:25.322679  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:25.322750  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.327559  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:25.327592  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:25.396299  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:25.396354  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:25.447121  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:25.447188  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:25.501357  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:25.501413  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:25.572678  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:25.572741  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:25.624203  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:25.624248  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:26.021189  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:26.021250  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:26.122845  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:26.122893  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:26.297704  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:26.297746  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:26.361771  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:26.361826  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:26.422252  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:26.422292  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:26.479602  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:26.479641  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:27.848906  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:30.348452  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:28.997002  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:25:29.008040  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I0108 22:25:29.009729  375293 api_server.go:141] control plane version: v1.28.4
	I0108 22:25:29.009758  375293 api_server.go:131] duration metric: took 4.142001296s to wait for apiserver health ...
	I0108 22:25:29.009770  375293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:25:29.009807  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:29.009872  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:29.064244  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:29.064280  375293 cri.go:89] found id: ""
	I0108 22:25:29.064292  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:29.064357  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.069801  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:29.069900  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:29.115294  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:29.115328  375293 cri.go:89] found id: ""
	I0108 22:25:29.115338  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:29.115426  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.120512  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:29.120600  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:29.173571  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:29.173600  375293 cri.go:89] found id: ""
	I0108 22:25:29.173609  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:29.173670  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.179649  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:29.179724  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:29.230220  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:29.230272  375293 cri.go:89] found id: ""
	I0108 22:25:29.230286  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:29.230384  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.235437  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:29.235540  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:29.280861  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:29.280892  375293 cri.go:89] found id: ""
	I0108 22:25:29.280904  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:29.280974  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.286131  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:29.286247  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:29.337665  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:29.337700  375293 cri.go:89] found id: ""
	I0108 22:25:29.337711  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:29.337765  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.343912  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:29.344009  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:29.400428  375293 cri.go:89] found id: ""
	I0108 22:25:29.400458  375293 logs.go:284] 0 containers: []
	W0108 22:25:29.400466  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:29.400476  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:29.400532  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:29.458375  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:29.458416  375293 cri.go:89] found id: ""
	I0108 22:25:29.458428  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:29.458503  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.464513  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:29.464555  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:29.809503  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:29.809550  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:29.916786  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:29.916864  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:30.077876  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:30.077929  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:30.139380  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:30.139445  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:30.186829  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:30.186861  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:30.244185  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:30.244230  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:30.300429  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:30.300488  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:30.316880  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:30.316920  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:30.370537  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:30.370581  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:30.419043  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:30.419093  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:30.482758  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:30.482804  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:33.043083  375293 system_pods.go:59] 8 kube-system pods found
	I0108 22:25:33.043134  375293 system_pods.go:61] "coredns-5dd5756b68-jbz6n" [562faf84-b986-4f0e-97cd-41aa5ac7ea17] Running
	I0108 22:25:33.043139  375293 system_pods.go:61] "etcd-embed-certs-903819" [68146164-7115-4489-8010-32774433564a] Running
	I0108 22:25:33.043143  375293 system_pods.go:61] "kube-apiserver-embed-certs-903819" [367d0612-bd4d-448f-84f2-118afcb9d095] Running
	I0108 22:25:33.043148  375293 system_pods.go:61] "kube-controller-manager-embed-certs-903819" [43c3944a-3dfd-44ce-ba68-baebbced4406] Running
	I0108 22:25:33.043152  375293 system_pods.go:61] "kube-proxy-hqj9b" [14b3f3bd-1d65-4382-adc2-09344b54463d] Running
	I0108 22:25:33.043157  375293 system_pods.go:61] "kube-scheduler-embed-certs-903819" [9c004a9c-c77a-4ee5-970d-db41ddc26439] Running
	I0108 22:25:33.043167  375293 system_pods.go:61] "metrics-server-57f55c9bc5-qhjlv" [f1bff39b-c944-4de0-a5b8-eb239e91c6db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:25:33.043172  375293 system_pods.go:61] "storage-provisioner" [949c6275-6836-4035-89f5-f2d2c2caaa89] Running
	I0108 22:25:33.043180  375293 system_pods.go:74] duration metric: took 4.033402969s to wait for pod list to return data ...
	I0108 22:25:33.043189  375293 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:25:33.047488  375293 default_sa.go:45] found service account: "default"
	I0108 22:25:33.047526  375293 default_sa.go:55] duration metric: took 4.328925ms for default service account to be created ...
	I0108 22:25:33.047540  375293 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:25:33.055793  375293 system_pods.go:86] 8 kube-system pods found
	I0108 22:25:33.055824  375293 system_pods.go:89] "coredns-5dd5756b68-jbz6n" [562faf84-b986-4f0e-97cd-41aa5ac7ea17] Running
	I0108 22:25:33.055829  375293 system_pods.go:89] "etcd-embed-certs-903819" [68146164-7115-4489-8010-32774433564a] Running
	I0108 22:25:33.055834  375293 system_pods.go:89] "kube-apiserver-embed-certs-903819" [367d0612-bd4d-448f-84f2-118afcb9d095] Running
	I0108 22:25:33.055838  375293 system_pods.go:89] "kube-controller-manager-embed-certs-903819" [43c3944a-3dfd-44ce-ba68-baebbced4406] Running
	I0108 22:25:33.055841  375293 system_pods.go:89] "kube-proxy-hqj9b" [14b3f3bd-1d65-4382-adc2-09344b54463d] Running
	I0108 22:25:33.055845  375293 system_pods.go:89] "kube-scheduler-embed-certs-903819" [9c004a9c-c77a-4ee5-970d-db41ddc26439] Running
	I0108 22:25:33.055852  375293 system_pods.go:89] "metrics-server-57f55c9bc5-qhjlv" [f1bff39b-c944-4de0-a5b8-eb239e91c6db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:25:33.055859  375293 system_pods.go:89] "storage-provisioner" [949c6275-6836-4035-89f5-f2d2c2caaa89] Running
	I0108 22:25:33.055872  375293 system_pods.go:126] duration metric: took 8.323722ms to wait for k8s-apps to be running ...
	I0108 22:25:33.055881  375293 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:25:33.055939  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:25:33.074598  375293 system_svc.go:56] duration metric: took 18.695286ms WaitForService to wait for kubelet.
	I0108 22:25:33.074637  375293 kubeadm.go:581] duration metric: took 4m17.012976103s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:25:33.074671  375293 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:25:33.079188  375293 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:25:33.079227  375293 node_conditions.go:123] node cpu capacity is 2
	I0108 22:25:33.079246  375293 node_conditions.go:105] duration metric: took 4.559946ms to run NodePressure ...
	I0108 22:25:33.079261  375293 start.go:228] waiting for startup goroutines ...
	I0108 22:25:33.079270  375293 start.go:233] waiting for cluster config update ...
	I0108 22:25:33.079283  375293 start.go:242] writing updated cluster config ...
	I0108 22:25:33.079792  375293 ssh_runner.go:195] Run: rm -f paused
	I0108 22:25:33.144148  375293 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:25:33.146897  375293 out.go:177] * Done! kubectl is now configured to use "embed-certs-903819" cluster and "default" namespace by default
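The "Gathering logs for ..." entries above show how minikube collects diagnostics from the node during this wait loop: over SSH it runs journalctl for the kubelet unit, dmesg, "kubectl describe nodes", and "crictl logs --tail 400" for each control-plane container it discovered. A minimal sketch of repeating the same collection by hand on the node (the container ID below is a placeholder, not an ID from this run):

  sudo crictl ps -a                            # list all containers and their IDs
  sudo crictl logs --tail 400 <container-id>   # per-container logs, as gathered above
  sudo journalctl -u kubelet -n 400            # kubelet unit logs
  sudo journalctl -u crio -n 400               # CRI-O unit logs
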
	I0108 22:25:32.349693  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:34.845955  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:36.851909  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:39.348575  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:41.350957  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:43.848565  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:46.348360  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:48.847346  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:51.346764  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:53.849331  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:56.349683  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:58.350457  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:00.847803  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:03.352522  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:05.844769  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:07.846346  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:09.848453  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:11.850250  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:14.347576  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:15.349616  375556 pod_ready.go:81] duration metric: took 4m0.011802861s waiting for pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace to be "Ready" ...
	E0108 22:26:15.349643  375556 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 22:26:15.349651  375556 pod_ready.go:38] duration metric: took 4m2.748998751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:26:15.349666  375556 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:26:15.349720  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:15.349773  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:15.414233  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:15.414273  375556 cri.go:89] found id: ""
	I0108 22:26:15.414286  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:15.414367  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.421348  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:15.421439  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:15.480484  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:15.480508  375556 cri.go:89] found id: ""
	I0108 22:26:15.480517  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:15.480569  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.486049  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:15.486125  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:15.551549  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:15.551588  375556 cri.go:89] found id: ""
	I0108 22:26:15.551600  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:15.551665  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.556950  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:15.557035  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:15.607375  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:15.607417  375556 cri.go:89] found id: ""
	I0108 22:26:15.607433  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:15.607530  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.613182  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:15.613253  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:15.663780  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:15.663805  375556 cri.go:89] found id: ""
	I0108 22:26:15.663813  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:15.663882  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.668629  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:15.668748  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:15.722341  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:15.722370  375556 cri.go:89] found id: ""
	I0108 22:26:15.722380  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:15.722453  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.727974  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:15.728089  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:15.782298  375556 cri.go:89] found id: ""
	I0108 22:26:15.782331  375556 logs.go:284] 0 containers: []
	W0108 22:26:15.782349  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:15.782358  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:15.782436  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:15.836150  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:15.836194  375556 cri.go:89] found id: ""
	I0108 22:26:15.836207  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:15.836307  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.842152  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:15.842184  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:15.900314  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:15.900378  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:15.974860  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:15.974903  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:16.021465  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:16.021529  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:16.477647  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:16.477706  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:16.588562  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:16.588615  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:16.604310  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:16.604383  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:16.770738  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:16.770778  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:16.835271  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:16.835320  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:16.899297  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:16.899354  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:16.957508  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:16.957549  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:17.001214  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:17.001255  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:19.561271  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:26:19.578731  375556 api_server.go:72] duration metric: took 4m10.049236985s to wait for apiserver process to appear ...
	I0108 22:26:19.578768  375556 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:26:19.578821  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:19.578897  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:19.630380  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:19.630410  375556 cri.go:89] found id: ""
	I0108 22:26:19.630422  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:19.630496  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.635902  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:19.635998  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:19.682023  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:19.682057  375556 cri.go:89] found id: ""
	I0108 22:26:19.682072  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:19.682143  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.688443  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:19.688567  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:19.738612  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:19.738651  375556 cri.go:89] found id: ""
	I0108 22:26:19.738664  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:19.738790  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.745590  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:19.745726  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:19.796647  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:19.796674  375556 cri.go:89] found id: ""
	I0108 22:26:19.796685  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:19.796747  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.801789  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:19.801872  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:19.846026  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:19.846060  375556 cri.go:89] found id: ""
	I0108 22:26:19.846070  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:19.846150  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.851227  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:19.851299  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:19.906135  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:19.906173  375556 cri.go:89] found id: ""
	I0108 22:26:19.906184  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:19.906267  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.911914  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:19.912048  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:19.960064  375556 cri.go:89] found id: ""
	I0108 22:26:19.960104  375556 logs.go:284] 0 containers: []
	W0108 22:26:19.960117  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:19.960126  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:19.960198  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:20.010136  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:20.010171  375556 cri.go:89] found id: ""
	I0108 22:26:20.010181  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:20.010256  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:20.015368  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:20.015402  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:20.122508  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:20.122575  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:20.272565  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:20.272610  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:20.335281  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:20.335334  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:20.384028  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:20.384088  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:20.779192  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:20.779250  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:20.795137  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:20.795170  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:20.863312  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:20.863395  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:20.918084  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:20.918132  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:20.966066  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:20.966108  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:21.030610  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:21.030704  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:21.083525  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:21.083567  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:23.662287  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:26:23.671857  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 200:
	ok
	I0108 22:26:23.673883  375556 api_server.go:141] control plane version: v1.28.4
	I0108 22:26:23.673919  375556 api_server.go:131] duration metric: took 4.095141482s to wait for apiserver health ...
	I0108 22:26:23.673932  375556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:26:23.673967  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:23.674045  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:23.733069  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:23.733098  375556 cri.go:89] found id: ""
	I0108 22:26:23.733109  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:23.733168  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.739866  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:23.739960  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:23.807666  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:23.807693  375556 cri.go:89] found id: ""
	I0108 22:26:23.807704  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:23.807765  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.813449  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:23.813543  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:23.876403  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:23.876431  375556 cri.go:89] found id: ""
	I0108 22:26:23.876442  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:23.876511  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.885128  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:23.885232  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:23.953100  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:23.953129  375556 cri.go:89] found id: ""
	I0108 22:26:23.953139  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:23.953211  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.960146  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:23.960246  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:24.022581  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:24.022608  375556 cri.go:89] found id: ""
	I0108 22:26:24.022616  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:24.022669  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:24.029307  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:24.029399  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:24.088026  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:24.088063  375556 cri.go:89] found id: ""
	I0108 22:26:24.088074  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:24.088151  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:24.094051  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:24.094175  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:24.156867  375556 cri.go:89] found id: ""
	I0108 22:26:24.156902  375556 logs.go:284] 0 containers: []
	W0108 22:26:24.156914  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:24.156924  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:24.157020  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:24.219558  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:24.219581  375556 cri.go:89] found id: ""
	I0108 22:26:24.219589  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:24.219641  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:24.224823  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:24.224866  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:24.321726  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:24.321777  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:24.749669  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:24.749737  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:24.821645  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:24.821690  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:24.883279  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:24.883325  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:24.942199  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:24.942253  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:25.003721  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:25.003766  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:25.051208  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:25.051241  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:25.102533  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:25.102580  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:25.158556  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:25.158610  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:25.263571  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:25.263618  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:25.281380  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:25.281414  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:27.948731  375556 system_pods.go:59] 8 kube-system pods found
	I0108 22:26:27.948767  375556 system_pods.go:61] "coredns-5dd5756b68-r27zw" [c82dae88-118a-4e13-a714-1240d48dfc4e] Running
	I0108 22:26:27.948774  375556 system_pods.go:61] "etcd-default-k8s-diff-port-292054" [d8145b74-cc40-40eb-b9e2-5a19e096e5f7] Running
	I0108 22:26:27.948782  375556 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-292054" [5bb945e6-e633-4fdc-bbec-16c72cb3ca88] Running
	I0108 22:26:27.948787  375556 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-292054" [8d376b79-f3ab-4f74-a927-e3f1775853c0] Running
	I0108 22:26:27.948794  375556 system_pods.go:61] "kube-proxy-bwmkb" [c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2] Running
	I0108 22:26:27.948800  375556 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-292054" [d125cdbe-49e2-48af-bcf8-44d514cd4a1c] Running
	I0108 22:26:27.948811  375556 system_pods.go:61] "metrics-server-57f55c9bc5-jm9lg" [b94afab5-f573-4ed1-bc29-64eb8e90c574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:26:27.948827  375556 system_pods.go:61] "storage-provisioner" [05c2430d-d84e-415e-83b3-c32e7635fe74] Running
	I0108 22:26:27.948839  375556 system_pods.go:74] duration metric: took 4.274897836s to wait for pod list to return data ...
	I0108 22:26:27.948852  375556 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:26:27.952207  375556 default_sa.go:45] found service account: "default"
	I0108 22:26:27.952241  375556 default_sa.go:55] duration metric: took 3.378283ms for default service account to be created ...
	I0108 22:26:27.952252  375556 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:26:27.958708  375556 system_pods.go:86] 8 kube-system pods found
	I0108 22:26:27.958744  375556 system_pods.go:89] "coredns-5dd5756b68-r27zw" [c82dae88-118a-4e13-a714-1240d48dfc4e] Running
	I0108 22:26:27.958752  375556 system_pods.go:89] "etcd-default-k8s-diff-port-292054" [d8145b74-cc40-40eb-b9e2-5a19e096e5f7] Running
	I0108 22:26:27.958757  375556 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-292054" [5bb945e6-e633-4fdc-bbec-16c72cb3ca88] Running
	I0108 22:26:27.958763  375556 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-292054" [8d376b79-f3ab-4f74-a927-e3f1775853c0] Running
	I0108 22:26:27.958767  375556 system_pods.go:89] "kube-proxy-bwmkb" [c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2] Running
	I0108 22:26:27.958772  375556 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-292054" [d125cdbe-49e2-48af-bcf8-44d514cd4a1c] Running
	I0108 22:26:27.958849  375556 system_pods.go:89] "metrics-server-57f55c9bc5-jm9lg" [b94afab5-f573-4ed1-bc29-64eb8e90c574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:26:27.958860  375556 system_pods.go:89] "storage-provisioner" [05c2430d-d84e-415e-83b3-c32e7635fe74] Running
	I0108 22:26:27.958870  375556 system_pods.go:126] duration metric: took 6.613305ms to wait for k8s-apps to be running ...
	I0108 22:26:27.958892  375556 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:26:27.958967  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:26:27.979435  375556 system_svc.go:56] duration metric: took 20.53748ms WaitForService to wait for kubelet.
	I0108 22:26:27.979474  375556 kubeadm.go:581] duration metric: took 4m18.449992338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:26:27.979500  375556 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:26:27.983117  375556 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:26:27.983146  375556 node_conditions.go:123] node cpu capacity is 2
	I0108 22:26:27.983159  375556 node_conditions.go:105] duration metric: took 3.652979ms to run NodePressure ...
	I0108 22:26:27.983171  375556 start.go:228] waiting for startup goroutines ...
	I0108 22:26:27.983177  375556 start.go:233] waiting for cluster config update ...
	I0108 22:26:27.983187  375556 start.go:242] writing updated cluster config ...
	I0108 22:26:27.983521  375556 ssh_runner.go:195] Run: rm -f paused
	I0108 22:26:28.042279  375556 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:26:28.044728  375556 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-292054" cluster and "default" namespace by default
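Both start flows above complete even though their metrics-server pod never reports Ready; in the second flow the extra pod wait times out after 4m ("WaitExtra: waitPodCondition: context deadline exceeded") but is non-fatal, so completion is gated only on the apiserver healthz probe, the core kube-system pods, and the kubelet service. A sketch of checking the same state by hand against this cluster, using the context name and healthz endpoint reported above (assuming anonymous access to /healthz, which is enabled by default):

  kubectl --context default-k8s-diff-port-292054 -n kube-system get pods   # metrics-server stays Pending / not Ready
  curl -k https://192.168.50.18:8444/healthz                               # the probe minikube polls; prints "ok" on success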
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:15:41 UTC, ends at Mon 2024-01-08 22:34:35 UTC. --
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.262692213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=35794543-5fb1-476e-b67d-52e63aa8c72c name=/runtime.v1.RuntimeService/Version
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.265048177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4fbcd99e-aa14-4fa7-bc70-a44ed1ddaad3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.266296908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753275266009863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4fbcd99e-aa14-4fa7-bc70-a44ed1ddaad3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.267294984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5b39564a-1f49-46b5-b778-ad9f429ab24f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.267357143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5b39564a-1f49-46b5-b778-ad9f429ab24f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.267631007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131,PodSandboxId:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752479939620404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{io.kubernetes.container.hash: 312af6c7,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c,PodSandboxId:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752479229322417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,},Annotations:map[string]string{io.kubernetes.container.hash: d0629fb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11,PodSandboxId:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752478190581728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,},Annotations:map[string]string{io.kubernetes.container.hash: 2df9dc56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a,PodSandboxId:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752453709253427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 4b5b41db3bfd708974d709b20906a429,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7,PodSandboxId:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752453775227644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,},Annotations:
map[string]string{io.kubernetes.container.hash: 79124fc6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8,PodSandboxId:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752453218449993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f190eb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13,PodSandboxId:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752453031434921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5704fc2de7d01cdebc5c77e98b2033
d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5b39564a-1f49-46b5-b778-ad9f429ab24f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.312994608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1f7427bf-fcd7-4de1-8cb9-fb6a1f3fddde name=/runtime.v1.RuntimeService/Version
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.313078789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1f7427bf-fcd7-4de1-8cb9-fb6a1f3fddde name=/runtime.v1.RuntimeService/Version
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.314376546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d1c99dbe-8fec-4e2d-a0ef-18a0f4f5a3e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.314995031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753275314975174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d1c99dbe-8fec-4e2d-a0ef-18a0f4f5a3e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.316088825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=628f054f-5bc2-4bcf-864a-3d86c07bc144 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.316181041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=628f054f-5bc2-4bcf-864a-3d86c07bc144 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.316373779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131,PodSandboxId:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752479939620404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{io.kubernetes.container.hash: 312af6c7,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c,PodSandboxId:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752479229322417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,},Annotations:map[string]string{io.kubernetes.container.hash: d0629fb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11,PodSandboxId:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752478190581728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,},Annotations:map[string]string{io.kubernetes.container.hash: 2df9dc56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a,PodSandboxId:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752453709253427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 4b5b41db3bfd708974d709b20906a429,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7,PodSandboxId:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752453775227644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,},Annotations:
map[string]string{io.kubernetes.container.hash: 79124fc6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8,PodSandboxId:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752453218449993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f190eb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13,PodSandboxId:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752453031434921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5704fc2de7d01cdebc5c77e98b2033
d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=628f054f-5bc2-4bcf-864a-3d86c07bc144 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.362118732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fd55652d-7bcb-4c17-8704-1ed06e86df18 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.362276532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fd55652d-7bcb-4c17-8704-1ed06e86df18 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.365598264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=76e0fb06-a247-4061-8f19-45208d5b4d28 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.366060617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753275366044043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=76e0fb06-a247-4061-8f19-45208d5b4d28 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.366881087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e596fe1a-ec91-4f69-8e7c-699cec7f9474 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.367344933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e596fe1a-ec91-4f69-8e7c-699cec7f9474 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.368695387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131,PodSandboxId:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752479939620404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{io.kubernetes.container.hash: 312af6c7,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c,PodSandboxId:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752479229322417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,},Annotations:map[string]string{io.kubernetes.container.hash: d0629fb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11,PodSandboxId:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752478190581728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,},Annotations:map[string]string{io.kubernetes.container.hash: 2df9dc56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a,PodSandboxId:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752453709253427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 4b5b41db3bfd708974d709b20906a429,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7,PodSandboxId:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752453775227644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,},Annotations:
map[string]string{io.kubernetes.container.hash: 79124fc6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8,PodSandboxId:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752453218449993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f190eb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13,PodSandboxId:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752453031434921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5704fc2de7d01cdebc5c77e98b2033
d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e596fe1a-ec91-4f69-8e7c-699cec7f9474 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.373174515Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=22f98ebf-8a3f-4124-9367-ae87e339f465 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.373397054Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:949c6275-6836-4035-89f5-f2d2c2caaa89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752478996928842,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-08T22:21:18.657335738Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f509e054cc152ce088399f339d3e0dc8f083e4b817813cbfad7f09a97a98590,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-qhjlv,Uid:f1bff39b-c944-4de0-a5b8-eb239e91c6db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752478640422659,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-qhjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1bff39b-c944-4de0-a5b8-eb239e91c6d
b,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T22:21:18.293489919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jbz6n,Uid:562faf84-b986-4f0e-97cd-41aa5ac7ea17,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752476474942326,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T22:21:15.783303052Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&PodSandboxMetadata{Name:kube-proxy-hqj9b,Uid:14b3f3bd-1d65-4382-adc2-09
344b54463d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752476234169088,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T22:21:15.597097963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-903819,Uid:a5704fc2de7d01cdebc5c77e98b2033d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752452488211208,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: a5704fc2de7d01cdebc5c77e98b2033d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a5704fc2de7d01cdebc5c77e98b2033d,kubernetes.io/config.seen: 2024-01-08T22:20:51.938915783Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-903819,Uid:85c89db12549c8e4094a598a3e86a27a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752452476991195,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.132:8443,kubernetes.io/config.hash: 85c89db12549c8e4094a598a3e86a27a,kubernetes.io/config.seen: 2024-01-08T22:20:51.938913751Z,k
ubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-903819,Uid:4b5b41db3bfd708974d709b20906a429,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752452440840297,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b5b41db3bfd708974d709b20906a429,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4b5b41db3bfd708974d709b20906a429,kubernetes.io/config.seen: 2024-01-08T22:20:51.938917022Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-903819,Uid:085fa14de085a567626002de5792a237,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt
:1704752452435112550,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.132:2379,kubernetes.io/config.hash: 085fa14de085a567626002de5792a237,kubernetes.io/config.seen: 2024-01-08T22:20:51.938906335Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=22f98ebf-8a3f-4124-9367-ae87e339f465 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.374634915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9e2319f8-9494-4bfa-acba-e3ba4dbe3abf name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.374818493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9e2319f8-9494-4bfa-acba-e3ba4dbe3abf name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:34:35 embed-certs-903819 crio[740]: time="2024-01-08 22:34:35.375034436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131,PodSandboxId:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752479939620404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{io.kubernetes.container.hash: 312af6c7,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c,PodSandboxId:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752479229322417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,},Annotations:map[string]string{io.kubernetes.container.hash: d0629fb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11,PodSandboxId:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752478190581728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,},Annotations:map[string]string{io.kubernetes.container.hash: 2df9dc56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a,PodSandboxId:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752453709253427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 4b5b41db3bfd708974d709b20906a429,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7,PodSandboxId:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752453775227644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,},Annotations:
map[string]string{io.kubernetes.container.hash: 79124fc6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8,PodSandboxId:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752453218449993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f190eb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13,PodSandboxId:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752453031434921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5704fc2de7d01cdebc5c77e98b2033
d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9e2319f8-9494-4bfa-acba-e3ba4dbe3abf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10be43da68cf5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   58a23ae790192       storage-provisioner
	3d668e971bd86       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   945ca354713a9       kube-proxy-hqj9b
	9ae7848fe3ee0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   0daf08aef77cd       coredns-5dd5756b68-jbz6n
	c5c66b00d0275       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   39fac5fc8ed4c       etcd-embed-certs-903819
	5430b769556bb       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   8858ded44e890       kube-scheduler-embed-certs-903819
	8e83b759c6cec       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   3a4abfae5e370       kube-apiserver-embed-certs-903819
	ceba1f5202ccd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   a9a2e4161dc76       kube-controller-manager-embed-certs-903819
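	The listing above is CRI-O's view of the node at collection time. Assuming the embed-certs-903819 profile is still running and crictl is available inside the guest (neither of which this report verifies), a comparable snapshot can be taken manually with:
	    out/minikube-linux-amd64 -p embed-certs-903819 ssh -- sudo crictl ps -a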
	
	
	==> coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
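	The CoreDNS output above only covers configuration reloads. Assuming the k8s-app=kube-dns label shown in the pod sandbox metadata earlier in this log, the same container logs can be pulled directly with:
	    kubectl --context embed-certs-903819 -n kube-system logs -l k8s-app=kube-dns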
	
	
	==> describe nodes <==
	Name:               embed-certs-903819
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-903819
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=embed-certs-903819
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_21_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:20:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-903819
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:34:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:31:37 +0000   Mon, 08 Jan 2024 22:20:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:31:37 +0000   Mon, 08 Jan 2024 22:20:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:31:37 +0000   Mon, 08 Jan 2024 22:20:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:31:37 +0000   Mon, 08 Jan 2024 22:21:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.132
	  Hostname:    embed-certs-903819
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f183ef036284b6e80008b87d0d3f30b
	  System UUID:                5f183ef0-3628-4b6e-8000-8b87d0d3f30b
	  Boot ID:                    bd1baecc-be37-4aa8-bd81-dd09855d135b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jbz6n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-903819                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-903819             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-903819    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-hqj9b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-903819             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-qhjlv               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-903819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-903819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-903819 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node embed-certs-903819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node embed-certs-903819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node embed-certs-903819 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node embed-certs-903819 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node embed-certs-903819 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-903819 event: Registered Node embed-certs-903819 in Controller
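	The node snapshot above follows the kubectl "describe node" layout. Assuming the kubeconfig context carries the profile name (the convention used elsewhere in this report), it can be regenerated with:
	    kubectl --context embed-certs-903819 describe node embed-certs-903819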
	
	
	==> dmesg <==
	[Jan 8 22:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074413] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.616771] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.855772] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.164640] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.588704] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.442888] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.126566] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.166256] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[  +0.114530] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +0.252458] systemd-fstab-generator[724]: Ignoring "noauto" for root device
	[Jan 8 22:16] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[ +22.475701] kauditd_printk_skb: 34 callbacks suppressed
	[Jan 8 22:20] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.669685] systemd-fstab-generator[3733]: Ignoring "noauto" for root device
	[Jan 8 22:21] systemd-fstab-generator[4059]: Ignoring "noauto" for root device
	
	
	==> etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] <==
	{"level":"info","ts":"2024-01-08T22:20:56.868498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T22:20:56.868572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T22:20:56.868616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 received MsgPreVoteResp from a7da7c7e26779cb7 at term 1"}
	{"level":"info","ts":"2024-01-08T22:20:56.868628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:56.868634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 received MsgVoteResp from a7da7c7e26779cb7 at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:56.868642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a7da7c7e26779cb7 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:56.868649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a7da7c7e26779cb7 elected leader a7da7c7e26779cb7 at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:56.870316Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a7da7c7e26779cb7","local-member-attributes":"{Name:embed-certs-903819 ClientURLs:[https://192.168.72.132:2379]}","request-path":"/0/members/a7da7c7e26779cb7/attributes","cluster-id":"146bd9643c3d2907","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T22:20:56.870333Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:20:56.870489Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:56.871862Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"146bd9643c3d2907","local-member-id":"a7da7c7e26779cb7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:56.871986Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:56.872015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:20:56.872031Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:56.871991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T22:20:56.872686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T22:20:56.872799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T22:20:56.873098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.132:2379"}
	{"level":"info","ts":"2024-01-08T22:21:16.245627Z","caller":"traceutil/trace.go:171","msg":"trace[870548640] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"146.19177ms","start":"2024-01-08T22:21:16.099339Z","end":"2024-01-08T22:21:16.245531Z","steps":["trace[870548640] 'process raft request'  (duration: 85.502943ms)","trace[870548640] 'compare'  (duration: 46.062875ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:21:16.246111Z","caller":"traceutil/trace.go:171","msg":"trace[821539251] linearizableReadLoop","detail":"{readStateIndex:393; appliedIndex:391; }","duration":"139.245101ms","start":"2024-01-08T22:21:16.106833Z","end":"2024-01-08T22:21:16.246078Z","steps":["trace[821539251] 'read index received'  (duration: 3.70029ms)","trace[821539251] 'applied index is now lower than readState.Index'  (duration: 135.543206ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T22:21:16.247508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.662598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2024-01-08T22:21:16.247937Z","caller":"traceutil/trace.go:171","msg":"trace[115285580] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:385; }","duration":"141.148397ms","start":"2024-01-08T22:21:16.106767Z","end":"2024-01-08T22:21:16.247915Z","steps":["trace[115285580] 'agreement among raft nodes before linearized reading'  (duration: 140.588542ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:30:56.918645Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":712}
	{"level":"info","ts":"2024-01-08T22:30:56.922058Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":712,"took":"2.59318ms","hash":3077321258}
	{"level":"info","ts":"2024-01-08T22:30:56.922148Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3077321258,"revision":712,"compact-revision":-1}
	
	
	==> kernel <==
	 22:34:35 up 19 min,  0 users,  load average: 0.24, 0.17, 0.18
	Linux embed-certs-903819 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] <==
	I0108 22:30:58.730489       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:30:59.729959       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:30:59.730079       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:30:59.730091       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:30:59.730261       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:30:59.730282       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:30:59.731579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:31:58.568176       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:31:59.730294       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:31:59.730640       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:31:59.730688       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:31:59.732334       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:31:59.732564       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:31:59.732599       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:32:58.568386       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:33:58.568406       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:33:59.731960       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:33:59.732164       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:33:59.732193       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:33:59.733626       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:33:59.733678       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:33:59.733685       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] <==
	I0108 22:28:45.338800       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:29:14.830209       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:29:15.348441       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:29:44.836861       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:29:45.357398       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:30:14.848111       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:30:15.367591       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:30:44.860073       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:30:45.379303       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:31:14.869685       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:31:15.393146       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:31:44.878285       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:31:45.405271       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:32:14.888422       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:15.418190       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 22:32:19.039162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="260.028µs"
	I0108 22:32:30.042907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="186.078µs"
	E0108 22:32:44.898075       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:45.430805       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:33:14.905963       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:15.442081       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:33:44.912354       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:45.452568       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:34:14.927902       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:34:15.463577       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
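	The repeated 503 and stale-GroupVersion errors in the kube-apiserver and kube-controller-manager logs above both trace back to the v1beta1.metrics.k8s.io APIService served by metrics-server. Assuming the same context naming convention, its registration and backing pod can be inspected with:
	    kubectl --context embed-certs-903819 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-903819 -n kube-system describe pod -l k8s-app=metrics-server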
	
	
	==> kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] <==
	I0108 22:21:19.878063       1 server_others.go:69] "Using iptables proxy"
	I0108 22:21:19.918874       1 node.go:141] Successfully retrieved node IP: 192.168.72.132
	I0108 22:21:20.033252       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 22:21:20.033347       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:21:20.039156       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:21:20.040672       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:21:20.041863       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:21:20.042065       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:21:20.047171       1 config.go:188] "Starting service config controller"
	I0108 22:21:20.053451       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:21:20.055633       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:21:20.055844       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:21:20.060091       1 config.go:315] "Starting node config controller"
	I0108 22:21:20.060207       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:21:20.157248       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:21:20.160841       1 shared_informer.go:318] Caches are synced for node config
	I0108 22:21:20.161257       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] <==
	W0108 22:20:59.666521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:20:59.666585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 22:20:59.666639       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:20:59.666647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:20:59.713099       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:20:59.713161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:20:59.813168       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:20:59.813266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 22:20:59.870865       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:20:59.871015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 22:21:00.027023       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:00.027120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:00.032902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:00.033015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:00.095000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:00.095160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:00.135271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:00.135362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:00.193816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:21:00.193977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:21:00.240204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:21:00.240280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 22:21:00.316643       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:21:00.316786       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 22:21:02.150181       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:15:41 UTC, ends at Mon 2024-01-08 22:34:36 UTC. --
	Jan 08 22:32:03 embed-certs-903819 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:32:03 embed-certs-903819 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:32:07 embed-certs-903819 kubelet[4066]: E0108 22:32:07.032056    4066 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 22:32:07 embed-certs-903819 kubelet[4066]: E0108 22:32:07.032159    4066 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 22:32:07 embed-certs-903819 kubelet[4066]: E0108 22:32:07.032500    4066 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5tg6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-qhjlv_kube-system(f1bff39b-c944-4de0-a5b8-eb239e91c6db): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 22:32:07 embed-certs-903819 kubelet[4066]: E0108 22:32:07.032559    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:32:19 embed-certs-903819 kubelet[4066]: E0108 22:32:19.016695    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:32:30 embed-certs-903819 kubelet[4066]: E0108 22:32:30.015608    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:32:45 embed-certs-903819 kubelet[4066]: E0108 22:32:45.015255    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:32:57 embed-certs-903819 kubelet[4066]: E0108 22:32:57.017847    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:33:03 embed-certs-903819 kubelet[4066]: E0108 22:33:03.151924    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:33:03 embed-certs-903819 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:33:03 embed-certs-903819 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:33:03 embed-certs-903819 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:33:12 embed-certs-903819 kubelet[4066]: E0108 22:33:12.016591    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:33:25 embed-certs-903819 kubelet[4066]: E0108 22:33:25.014545    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:33:40 embed-certs-903819 kubelet[4066]: E0108 22:33:40.015356    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:33:53 embed-certs-903819 kubelet[4066]: E0108 22:33:53.014569    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:34:03 embed-certs-903819 kubelet[4066]: E0108 22:34:03.151056    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:34:03 embed-certs-903819 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:34:03 embed-certs-903819 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:34:03 embed-certs-903819 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:34:08 embed-certs-903819 kubelet[4066]: E0108 22:34:08.015013    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:34:20 embed-certs-903819 kubelet[4066]: E0108 22:34:20.016182    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:34:31 embed-certs-903819 kubelet[4066]: E0108 22:34:31.015028    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	
	
	==> storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] <==
	I0108 22:21:20.096970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:21:20.121619       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:21:20.122012       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:21:20.143261       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:21:20.145412       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"769a2d5e-78de-4bba-b7b8-4f926749b3f6", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-903819_f92207b3-5c18-4530-b46a-4c83fce84323 became leader
	I0108 22:21:20.145603       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-903819_f92207b3-5c18-4530-b46a-4c83fce84323!
	I0108 22:21:20.252696       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-903819_f92207b3-5c18-4530-b46a-4c83fce84323!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-903819 -n embed-certs-903819
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-903819 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qhjlv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-903819 describe pod metrics-server-57f55c9bc5-qhjlv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-903819 describe pod metrics-server-57f55c9bc5-qhjlv: exit status 1 (86.735727ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qhjlv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-903819 describe pod metrics-server-57f55c9bc5-qhjlv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.88s)
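Note: the repeated ImagePullBackOff entries in the kubelet log above follow directly from the registry override applied earlier in this run (addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain, visible in the Audit table further down), so the kubelet pulls fake.domain/registry.k8s.io/echoserver:1.4 from a host that never resolves. A minimal Go sketch of how such an override composes the pulled image reference (an illustration only, not minikube's actual implementation):

	package main

	import "fmt"

	// overrideRegistry prefixes an image reference with a custom registry,
	// mirroring the combined effect of the --images=... and --registries=...
	// flags shown in the Audit table below.
	func overrideRegistry(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		// Prints "fake.domain/registry.k8s.io/echoserver:1.4", the exact image
		// the kubelet above fails to pull with "no such host".
		fmt.Println(overrideRegistry("fake.domain", "registry.k8s.io/echoserver:1.4"))
	}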

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (520.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079759 -n old-k8s-version-079759
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-08 22:35:01.846299197 +0000 UTC m=+5577.547421548
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-079759 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-079759 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.656µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-079759 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
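Note: the assertion sequence above is a deadline-bounded poll: the harness waits up to 9m0s for pods labeled k8s-app=kubernetes-dashboard, and once the deadline passes every follow-up call (including the kubectl describe of the deployment) is aborted with the same "context deadline exceeded" error. A rough, self-contained Go sketch of that polling pattern, using a hypothetical podsReady stand-in for the real cluster check the harness performs:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// podsReady is a hypothetical stand-in for the real check (listing pods by
	// label selector and verifying they are running); here it never succeeds,
	// matching this failure.
	func podsReady(ctx context.Context, selector string) (bool, error) {
		_ = ctx
		_ = selector
		return false, nil
	}

	// waitForPods polls until podsReady reports true or the timeout elapses.
	func waitForPods(parent context.Context, selector string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(parent, timeout)
		defer cancel()
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			ok, err := podsReady(ctx, selector)
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			select {
			case <-ctx.Done():
				// Surfaces as "context deadline exceeded" in the test output.
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// The real test waits 9m0s; a short timeout keeps the sketch quick to run.
		err := waitForPods(context.Background(), "k8s-app=kubernetes-dashboard", 2*time.Second)
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Println("pods matching k8s-app=kubernetes-dashboard did not appear:", err)
		}
	}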
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079759 -n old-k8s-version-079759
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-079759 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-079759 logs -n 25: (2.051535814s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-523607                              | cert-expiration-523607       | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343954 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | disable-driver-mounts-343954                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:09 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079759        | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC | 08 Jan 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-675668             | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-903819            | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-292054  | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC | 08 Jan 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC |                     |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079759             | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC | 08 Jan 24 22:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-675668                  | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-903819                 | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-292054       | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:26 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:11:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:11:46.087099  375556 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:11:46.087257  375556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:46.087268  375556 out.go:309] Setting ErrFile to fd 2...
	I0108 22:11:46.087273  375556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:11:46.087523  375556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:11:46.088153  375556 out.go:303] Setting JSON to false
	I0108 22:11:46.089299  375556 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10432,"bootTime":1704741474,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:11:46.089374  375556 start.go:138] virtualization: kvm guest
	I0108 22:11:46.092180  375556 out.go:177] * [default-k8s-diff-port-292054] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:11:46.093649  375556 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:11:46.093727  375556 notify.go:220] Checking for updates...
	I0108 22:11:46.095251  375556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:11:46.097142  375556 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:11:46.099048  375556 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:11:46.100864  375556 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:11:46.102762  375556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:11:46.105085  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:11:46.105575  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:11:46.105654  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:11:46.122253  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0108 22:11:46.122758  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:11:46.123342  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:11:46.123412  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:11:46.123752  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:11:46.123910  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:11:46.124157  375556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:11:46.124499  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:11:46.124539  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:11:46.140751  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0108 22:11:46.141282  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:11:46.141773  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:11:46.141798  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:11:46.142141  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:11:46.142444  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:11:46.184643  375556 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 22:11:46.186001  375556 start.go:298] selected driver: kvm2
	I0108 22:11:46.186020  375556 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:11:46.186148  375556 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:11:46.186947  375556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:46.187023  375556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:11:46.203781  375556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:11:46.204243  375556 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:11:46.204341  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:11:46.204355  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:11:46.204368  375556 start_flags.go:321] config:
	{Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-29205
4 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:11:46.204574  375556 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:11:46.206922  375556 out.go:177] * Starting control plane node default-k8s-diff-port-292054 in cluster default-k8s-diff-port-292054
	I0108 22:11:49.059974  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:11:46.208771  375556 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:11:46.208837  375556 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:11:46.208846  375556 cache.go:56] Caching tarball of preloaded images
	I0108 22:11:46.208953  375556 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:11:46.208964  375556 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:11:46.209090  375556 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/config.json ...
	I0108 22:11:46.209292  375556 start.go:365] acquiring machines lock for default-k8s-diff-port-292054: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:11:52.131718  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:11:58.211727  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:01.283728  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:07.363651  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:10.435843  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:16.515718  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:19.587893  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:25.667716  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:28.739741  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:34.819670  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:37.891747  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:43.971702  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:47.043706  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:53.123662  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:12:56.195726  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:02.275699  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:05.347708  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:11.427670  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:14.499733  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:20.579716  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:23.651809  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:29.731813  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:32.803834  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:38.883645  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:41.955722  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:48.035781  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:51.107833  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:13:57.187725  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:00.259743  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:06.339763  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:09.411776  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:15.491797  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:18.563880  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:24.643806  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:27.715717  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:33.795783  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:36.867725  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:42.947651  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:46.019719  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:52.099719  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:14:55.171662  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:01.251699  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:04.323666  374880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	I0108 22:15:07.328244  375205 start.go:369] acquired machines lock for "no-preload-675668" in 4m2.333038111s
	I0108 22:15:07.328384  375205 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:07.328398  375205 fix.go:54] fixHost starting: 
	I0108 22:15:07.328972  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:07.329012  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:07.346002  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0108 22:15:07.346606  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:07.347087  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:15:07.347112  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:07.347614  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:07.347816  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:07.347977  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:15:07.349843  375205 fix.go:102] recreateIfNeeded on no-preload-675668: state=Stopped err=<nil>
	I0108 22:15:07.349873  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	W0108 22:15:07.350055  375205 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:07.352092  375205 out.go:177] * Restarting existing kvm2 VM for "no-preload-675668" ...
	I0108 22:15:07.325708  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:07.325751  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:15:07.327981  374880 machine.go:91] provisioned docker machine in 4m37.376179376s
	I0108 22:15:07.328067  374880 fix.go:56] fixHost completed within 4m37.402208453s
	I0108 22:15:07.328080  374880 start.go:83] releasing machines lock for "old-k8s-version-079759", held for 4m37.402236557s
	W0108 22:15:07.328149  374880 start.go:694] error starting host: provision: host is not running
	W0108 22:15:07.328386  374880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0108 22:15:07.328401  374880 start.go:709] Will try again in 5 seconds ...
	I0108 22:15:07.353648  375205 main.go:141] libmachine: (no-preload-675668) Calling .Start
	I0108 22:15:07.353904  375205 main.go:141] libmachine: (no-preload-675668) Ensuring networks are active...
	I0108 22:15:07.354917  375205 main.go:141] libmachine: (no-preload-675668) Ensuring network default is active
	I0108 22:15:07.355390  375205 main.go:141] libmachine: (no-preload-675668) Ensuring network mk-no-preload-675668 is active
	I0108 22:15:07.355764  375205 main.go:141] libmachine: (no-preload-675668) Getting domain xml...
	I0108 22:15:07.356506  375205 main.go:141] libmachine: (no-preload-675668) Creating domain...
	I0108 22:15:08.673735  375205 main.go:141] libmachine: (no-preload-675668) Waiting to get IP...
	I0108 22:15:08.674861  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:08.675407  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:08.675502  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:08.675369  376073 retry.go:31] will retry after 298.445271ms: waiting for machine to come up
	I0108 22:15:08.976053  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:08.976594  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:08.976624  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:08.976525  376073 retry.go:31] will retry after 372.862343ms: waiting for machine to come up
	I0108 22:15:09.351338  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:09.351843  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:09.351864  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:09.351801  376073 retry.go:31] will retry after 463.145179ms: waiting for machine to come up
	I0108 22:15:09.816629  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:09.817035  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:09.817059  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:09.816979  376073 retry.go:31] will retry after 390.229237ms: waiting for machine to come up
	I0108 22:15:12.328668  374880 start.go:365] acquiring machines lock for old-k8s-version-079759: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:15:10.208639  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:10.209034  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:10.209068  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:10.208972  376073 retry.go:31] will retry after 547.133251ms: waiting for machine to come up
	I0108 22:15:10.758143  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:10.758742  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:10.758779  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:10.758673  376073 retry.go:31] will retry after 833.304996ms: waiting for machine to come up
	I0108 22:15:11.594018  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:11.594517  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:11.594551  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:11.594482  376073 retry.go:31] will retry after 1.155542967s: waiting for machine to come up
	I0108 22:15:12.751694  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:12.752196  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:12.752233  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:12.752162  376073 retry.go:31] will retry after 1.197873107s: waiting for machine to come up
	I0108 22:15:13.951593  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:13.952050  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:13.952072  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:13.952005  376073 retry.go:31] will retry after 1.257059014s: waiting for machine to come up
	I0108 22:15:15.211632  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:15.212133  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:15.212161  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:15.212090  376073 retry.go:31] will retry after 2.27321783s: waiting for machine to come up
	I0108 22:15:17.487177  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:17.487684  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:17.487712  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:17.487631  376073 retry.go:31] will retry after 2.218202362s: waiting for machine to come up
	I0108 22:15:19.709130  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:19.709618  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:19.709651  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:19.709552  376073 retry.go:31] will retry after 2.976711307s: waiting for machine to come up
	I0108 22:15:22.687741  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:22.688337  375205 main.go:141] libmachine: (no-preload-675668) DBG | unable to find current IP address of domain no-preload-675668 in network mk-no-preload-675668
	I0108 22:15:22.688373  375205 main.go:141] libmachine: (no-preload-675668) DBG | I0108 22:15:22.688238  376073 retry.go:31] will retry after 4.028238242s: waiting for machine to come up
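For context on the "will retry after …" lines above: the driver is polling libvirt for the domain's DHCP lease and backing off a little more on each attempt. A minimal Go sketch of that wait loop, with a hypothetical lookupIP stand-in (not minikube's actual API):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in for querying libvirt's DHCP leases for the domain's
	// current IP; it errors until the guest has obtained a lease.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address") // hypothetical
	}

	// waitForIP polls lookupIP with a growing, slightly jittered backoff until the
	// machine reports an address or the overall deadline expires.
	func waitForIP(domain string, deadline time.Duration) (string, error) {
		start := time.Now()
		wait := time.Second
		for time.Since(start) < deadline {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			// grow the wait a little each round, as in the log above
			wait += time.Duration(rand.Int63n(int64(wait / 4)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
		}
		return "", fmt.Errorf("machine %s did not come up within %s", domain, deadline)
	}

	func main() {
		if ip, err := waitForIP("no-preload-675668", 10*time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}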
	I0108 22:15:28.088862  375293 start.go:369] acquired machines lock for "embed-certs-903819" in 4m15.164556555s
	I0108 22:15:28.088954  375293 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:28.088965  375293 fix.go:54] fixHost starting: 
	I0108 22:15:28.089472  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:28.089526  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:28.108636  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0108 22:15:28.109141  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:28.109765  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:15:28.109816  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:28.110214  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:28.110458  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:28.110642  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:15:28.112595  375293 fix.go:102] recreateIfNeeded on embed-certs-903819: state=Stopped err=<nil>
	I0108 22:15:28.112635  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	W0108 22:15:28.112883  375293 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:28.115226  375293 out.go:177] * Restarting existing kvm2 VM for "embed-certs-903819" ...
	I0108 22:15:26.721451  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.721880  375205 main.go:141] libmachine: (no-preload-675668) Found IP for machine: 192.168.61.153
	I0108 22:15:26.721905  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has current primary IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.721912  375205 main.go:141] libmachine: (no-preload-675668) Reserving static IP address...
	I0108 22:15:26.722449  375205 main.go:141] libmachine: (no-preload-675668) Reserved static IP address: 192.168.61.153
	I0108 22:15:26.722475  375205 main.go:141] libmachine: (no-preload-675668) Waiting for SSH to be available...
	I0108 22:15:26.722498  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "no-preload-675668", mac: "52:54:00:08:3b:59", ip: "192.168.61.153"} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.722528  375205 main.go:141] libmachine: (no-preload-675668) DBG | skip adding static IP to network mk-no-preload-675668 - found existing host DHCP lease matching {name: "no-preload-675668", mac: "52:54:00:08:3b:59", ip: "192.168.61.153"}
	I0108 22:15:26.722545  375205 main.go:141] libmachine: (no-preload-675668) DBG | Getting to WaitForSSH function...
	I0108 22:15:26.724512  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.724861  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.724898  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.725004  375205 main.go:141] libmachine: (no-preload-675668) DBG | Using SSH client type: external
	I0108 22:15:26.725078  375205 main.go:141] libmachine: (no-preload-675668) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa (-rw-------)
	I0108 22:15:26.725130  375205 main.go:141] libmachine: (no-preload-675668) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:15:26.725152  375205 main.go:141] libmachine: (no-preload-675668) DBG | About to run SSH command:
	I0108 22:15:26.725172  375205 main.go:141] libmachine: (no-preload-675668) DBG | exit 0
	I0108 22:15:26.815569  375205 main.go:141] libmachine: (no-preload-675668) DBG | SSH cmd err, output: <nil>: 
	I0108 22:15:26.816005  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetConfigRaw
	I0108 22:15:26.816711  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:26.819269  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.819636  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.819681  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.819964  375205 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/config.json ...
	I0108 22:15:26.820191  375205 machine.go:88] provisioning docker machine ...
	I0108 22:15:26.820215  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:26.820446  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:26.820626  375205 buildroot.go:166] provisioning hostname "no-preload-675668"
	I0108 22:15:26.820648  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:26.820790  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:26.823021  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.823390  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.823421  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.823567  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:26.823781  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.823943  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.824103  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:26.824331  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:26.824924  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:26.824958  375205 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-675668 && echo "no-preload-675668" | sudo tee /etc/hostname
	I0108 22:15:26.960664  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-675668
	
	I0108 22:15:26.960713  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:26.964110  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.964397  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:26.964437  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:26.964605  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:26.964918  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.965153  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:26.965334  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:26.965543  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:26.965958  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:26.965985  375205 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-675668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-675668/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-675668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:15:27.102584  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
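The hostname and /etc/hosts commands above are executed on the guest over SSH with key authentication (the StrictHostKeyChecking/IdentitiesOnly options shown earlier). A minimal sketch of running such a remote command with golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, and runRemote is an illustrative helper, not minikube's ssh_runner itself:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote runs a single command on the guest over SSH with key auth.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.61.153:22", "docker",
			"/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa",
			`sudo hostname no-preload-675668 && echo "no-preload-675668" | sudo tee /etc/hostname`)
		fmt.Println(out, err)
	}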
	I0108 22:15:27.102632  375205 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:15:27.102663  375205 buildroot.go:174] setting up certificates
	I0108 22:15:27.102678  375205 provision.go:83] configureAuth start
	I0108 22:15:27.102688  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetMachineName
	I0108 22:15:27.103024  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:27.105986  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.106379  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.106400  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.106586  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.108670  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.109003  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.109029  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.109216  375205 provision.go:138] copyHostCerts
	I0108 22:15:27.109300  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:15:27.109320  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:15:27.109426  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:15:27.109561  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:15:27.109571  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:15:27.109599  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:15:27.109663  375205 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:15:27.109670  375205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:15:27.109691  375205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:15:27.109751  375205 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.no-preload-675668 san=[192.168.61.153 192.168.61.153 localhost 127.0.0.1 minikube no-preload-675668]
	I0108 22:15:27.297801  375205 provision.go:172] copyRemoteCerts
	I0108 22:15:27.297888  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:15:27.297915  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.301050  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.301503  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.301545  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.301737  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.301955  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.302121  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.302265  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:27.394076  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:15:27.420873  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 22:15:27.446852  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:15:27.475352  375205 provision.go:86] duration metric: configureAuth took 372.6598ms
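The configureAuth step above boils down to signing a server certificate with the cluster CA, listing the machine IP and the usual host names as SANs, then copying ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A self-contained Go sketch of that signing step with crypto/x509 (a throwaway CA is generated inline and errors are elided for brevity; minikube reuses ca.pem/ca-key.pem from .minikube/certs):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA so the sketch is self-contained.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs listed in the "generating server cert" line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-675668"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.61.153"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "no-preload-675668"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		srvPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
		fmt.Printf("server.pem: %d bytes, issued by %s\n", len(srvPEM), caCert.Subject.Organization[0])
	}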
	I0108 22:15:27.475406  375205 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:15:27.475661  375205 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:15:27.475793  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.478557  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.478872  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.478906  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.479091  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.479354  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.479579  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.479768  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.479939  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:27.480273  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:27.480291  375205 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:15:27.822802  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:15:27.822834  375205 machine.go:91] provisioned docker machine in 1.002628424s
	I0108 22:15:27.822845  375205 start.go:300] post-start starting for "no-preload-675668" (driver="kvm2")
	I0108 22:15:27.822858  375205 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:15:27.822874  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:27.823282  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:15:27.823320  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.825948  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.826276  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.826298  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.826407  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.826597  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.826793  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.826922  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:27.918118  375205 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:15:27.922998  375205 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:15:27.923044  375205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:15:27.923151  375205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:15:27.923275  375205 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:15:27.923407  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:15:27.933715  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:27.960061  375205 start.go:303] post-start completed in 137.19795ms
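The filesync scan in post-start mirrors every file under .minikube/files onto the guest, which is why files/etc/ssl/certs/3419822.pem lands in /etc/ssl/certs. A small sketch of that mapping, assuming an illustrative root directory (localAssets is not minikube's actual helper):

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
		"strings"
	)

	// localAssets walks the local files directory and returns local path -> remote
	// path, mirroring the directory layout onto the guest filesystem root.
	func localAssets(root string) (map[string]string, error) {
		assets := map[string]string{}
		err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			remote := "/" + strings.TrimPrefix(filepath.ToSlash(p), filepath.ToSlash(root)+"/")
			assets[p] = remote
			return nil
		})
		return assets, err
	}

	func main() {
		m, err := localAssets("/home/jenkins/minikube-integration/17866-334768/.minikube/files")
		fmt.Println(m, err)
	}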
	I0108 22:15:27.960109  375205 fix.go:56] fixHost completed within 20.631710493s
	I0108 22:15:27.960137  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:27.963254  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.963656  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:27.963688  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:27.964017  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:27.964325  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.964533  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:27.964722  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:27.964945  375205 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:27.965301  375205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.153 22 <nil> <nil>}
	I0108 22:15:27.965314  375205 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:15:28.088665  375205 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752128.028688224
	
	I0108 22:15:28.088696  375205 fix.go:206] guest clock: 1704752128.028688224
	I0108 22:15:28.088706  375205 fix.go:219] Guest: 2024-01-08 22:15:28.028688224 +0000 UTC Remote: 2024-01-08 22:15:27.960113957 +0000 UTC m=+263.145626296 (delta=68.574267ms)
	I0108 22:15:28.088734  375205 fix.go:190] guest clock delta is within tolerance: 68.574267ms
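The guest-clock check above runs `date +%s.%N` on the guest, parses the result, and compares it with the host's idea of the time; the 68ms delta here is well inside tolerance. A minimal Go sketch of the parse-and-compare step (the 2s tolerance is illustrative, not necessarily minikube's exact threshold):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec := int64(0)
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1704752128.028688224") // value from the log above
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // illustrative threshold
		fmt.Printf("delta=%s within tolerance=%v\n", delta, delta <= tolerance)
	}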
	I0108 22:15:28.088742  375205 start.go:83] releasing machines lock for "no-preload-675668", held for 20.760456272s
	I0108 22:15:28.088775  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.089136  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:28.091887  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.092255  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.092274  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.092537  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093187  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093416  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:15:28.093504  375205 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:15:28.093546  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:28.093722  375205 ssh_runner.go:195] Run: cat /version.json
	I0108 22:15:28.093769  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:15:28.096920  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.096969  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097385  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.097428  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097460  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:28.097482  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:28.097739  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:28.097767  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:15:28.098020  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:28.098074  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:15:28.098243  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:28.098254  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:15:28.098459  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:28.098460  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:15:28.221319  375205 ssh_runner.go:195] Run: systemctl --version
	I0108 22:15:28.227501  375205 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:15:28.379259  375205 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:15:28.386159  375205 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:15:28.386272  375205 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:15:28.404416  375205 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:15:28.404469  375205 start.go:475] detecting cgroup driver to use...
	I0108 22:15:28.404575  375205 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:15:28.421612  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:15:28.438920  375205 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:15:28.439001  375205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:15:28.455220  375205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:15:28.473982  375205 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:15:28.610132  375205 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:15:28.735485  375205 docker.go:219] disabling docker service ...
	I0108 22:15:28.735627  375205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:15:28.750327  375205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:15:28.768782  375205 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:15:28.891784  375205 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:15:29.006680  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
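The block above stops, disables, and masks the cri-docker and docker units so only cri-o serves the CRI socket. A compact sketch of that unit sequence; in minikube the commands run on the guest via its ssh_runner, here run() only prints them so the sketch is safe to execute:

	package main

	import "fmt"

	// disableDocker mirrors the systemctl sequence in the log above.
	func disableDocker(run func(string) error) error {
		for _, cmd := range []string{
			"sudo systemctl stop -f docker.socket",
			"sudo systemctl stop -f docker.service",
			"sudo systemctl disable docker.socket",
			"sudo systemctl mask docker.service",
		} {
			if err := run(cmd); err != nil {
				return fmt.Errorf("%q: %w", cmd, err)
			}
		}
		// the final is-active probe is expected to report inactive once masked
		_ = run("sudo systemctl is-active --quiet service docker")
		return nil
	}

	func main() {
		run := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
		fmt.Println(disableDocker(run))
	}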
	I0108 22:15:29.023187  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:15:29.043520  375205 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:15:29.043601  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.056442  375205 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:15:29.056525  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.066874  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.077969  375205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:29.090310  375205 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:15:29.102253  375205 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:15:29.114920  375205 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:15:29.115022  375205 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:15:29.131677  375205 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:15:29.142326  375205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:15:29.259562  375205 ssh_runner.go:195] Run: sudo systemctl restart crio
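The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, force the cgroupfs cgroup manager, and put conmon in the "pod" cgroup before crio is restarted. A Go sketch of the same in-memory edits using regexp (rewriteCrioConf is an illustrative helper, not minikube's code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies the same three edits the sed commands perform.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(rewriteCrioConf(in))
		// after editing, the log shows: sudo systemctl daemon-reload && sudo systemctl restart crio
	}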
	I0108 22:15:29.463482  375205 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:15:29.463554  375205 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:15:29.468579  375205 start.go:543] Will wait 60s for crictl version
	I0108 22:15:29.468665  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:29.476630  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:15:29.525900  375205 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
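After restarting crio, the log waits up to 60s for the CRI socket to appear and then asks crictl for the runtime version. A minimal sketch of that poll-then-query step (the poll interval is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForSocket polls until the CRI socket exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		fmt.Println(string(out), err)
	}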
	I0108 22:15:29.526053  375205 ssh_runner.go:195] Run: crio --version
	I0108 22:15:29.579948  375205 ssh_runner.go:195] Run: crio --version
	I0108 22:15:29.632573  375205 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0108 22:15:29.634161  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetIP
	I0108 22:15:29.637972  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:29.638472  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:15:29.638514  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:15:29.638828  375205 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0108 22:15:29.644170  375205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:29.658242  375205 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:15:29.658302  375205 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:29.701366  375205 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0108 22:15:29.701422  375205 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:15:29.701626  375205 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0108 22:15:29.701685  375205 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.701583  375205 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.701743  375205 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.701674  375205 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.701597  375205 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:29.701743  375205 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.701582  375205 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.703644  375205 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:29.703679  375205 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.703705  375205 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0108 22:15:29.703722  375205 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.703643  375205 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.703651  375205 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.703655  375205 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.703652  375205 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
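Because no preload tarball exists for v1.29.0-rc.2, LoadImages compares the runtime's image list (from `sudo crictl images --output json`) against the required set and marks everything absent as "needs transfer". A sketch of that comparison; the JSON field names follow crictl's output format, and missingImages is an illustrative helper:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// crictlImages models the relevant part of `crictl images --output json`.
	type crictlImages struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// missingImages returns the required images that the runtime does not have yet.
	func missingImages(crictlJSON []byte, required []string) ([]string, error) {
		var have crictlImages
		if err := json.Unmarshal(crictlJSON, &have); err != nil {
			return nil, err
		}
		present := map[string]bool{}
		for _, img := range have.Images {
			for _, tag := range img.RepoTags {
				present[tag] = true
			}
		}
		var missing []string
		for _, r := range required {
			if !present[r] {
				missing = append(missing, r)
			}
		}
		return missing, nil
	}

	func main() {
		out := []byte(`{"images":[{"id":"sha256:abc","repoTags":["registry.k8s.io/pause:3.9"]}]}`)
		need := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.10-0"}
		fmt.Println(missingImages(out, need))
	}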
	I0108 22:15:28.117212  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Start
	I0108 22:15:28.117480  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring networks are active...
	I0108 22:15:28.118363  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring network default is active
	I0108 22:15:28.118783  375293 main.go:141] libmachine: (embed-certs-903819) Ensuring network mk-embed-certs-903819 is active
	I0108 22:15:28.119425  375293 main.go:141] libmachine: (embed-certs-903819) Getting domain xml...
	I0108 22:15:28.120203  375293 main.go:141] libmachine: (embed-certs-903819) Creating domain...
	I0108 22:15:29.474037  375293 main.go:141] libmachine: (embed-certs-903819) Waiting to get IP...
	I0108 22:15:29.475109  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:29.475735  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:29.475862  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:29.475696  376188 retry.go:31] will retry after 284.136631ms: waiting for machine to come up
	I0108 22:15:29.762077  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:29.762586  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:29.762614  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:29.762538  376188 retry.go:31] will retry after 303.052805ms: waiting for machine to come up
	I0108 22:15:30.067299  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:30.067947  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:30.067997  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:30.067822  376188 retry.go:31] will retry after 471.679894ms: waiting for machine to come up
	I0108 22:15:30.541942  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:30.542626  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:30.542658  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:30.542542  376188 retry.go:31] will retry after 534.448155ms: waiting for machine to come up
	I0108 22:15:31.078549  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:31.079168  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:31.079212  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:31.079092  376188 retry.go:31] will retry after 595.348277ms: waiting for machine to come up
	I0108 22:15:31.675832  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:31.676249  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:31.676278  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:31.676209  376188 retry.go:31] will retry after 618.587146ms: waiting for machine to come up
	I0108 22:15:32.296396  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:32.296982  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:32.297011  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:32.296820  376188 retry.go:31] will retry after 730.322233ms: waiting for machine to come up
	I0108 22:15:29.877942  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.891002  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:29.891714  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:29.893908  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0108 22:15:29.901880  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:29.959729  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:29.975241  375205 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0108 22:15:29.975301  375205 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:29.975308  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:29.975351  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.022214  375205 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.074289  375205 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0108 22:15:30.074350  375205 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:30.074422  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.107460  375205 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0108 22:15:30.107547  375205 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:30.107634  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.137086  375205 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0108 22:15:30.137155  375205 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:30.137227  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.156198  375205 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0108 22:15:30.156291  375205 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:30.156357  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163468  375205 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0108 22:15:30.163522  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0108 22:15:30.163532  375205 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:30.163563  375205 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0108 22:15:30.163616  375205 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.163654  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0108 22:15:30.163660  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163762  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0108 22:15:30.163779  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0108 22:15:30.163583  375205 ssh_runner.go:195] Run: which crictl
	I0108 22:15:30.163849  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0108 22:15:30.304360  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:30.304458  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0108 22:15:30.304478  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0108 22:15:30.304481  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:30.304564  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0108 22:15:30.304603  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.304568  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:30.304636  375205 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:15:30.304678  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:30.304738  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:30.307415  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:30.307516  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:30.322465  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0108 22:15:30.322505  375205 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.322616  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0108 22:15:30.323275  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390462  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0108 22:15:30.390530  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0108 22:15:30.390546  375205 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 22:15:30.390566  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390612  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0108 22:15:30.390651  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:30.390657  375205 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:32.649486  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.326834963s)
	I0108 22:15:32.649532  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0108 22:15:32.649560  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:32.649569  375205 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.258890537s)
	I0108 22:15:32.649612  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0108 22:15:32.649622  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0108 22:15:32.649573  375205 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.258898806s)
	I0108 22:15:32.649638  375205 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0108 22:15:33.028658  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:33.029086  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:33.029117  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:33.029023  376188 retry.go:31] will retry after 1.009306133s: waiting for machine to come up
	I0108 22:15:34.040145  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:34.040574  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:34.040610  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:34.040517  376188 retry.go:31] will retry after 1.215287271s: waiting for machine to come up
	I0108 22:15:35.258130  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:35.258735  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:35.258767  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:35.258669  376188 retry.go:31] will retry after 1.604579686s: waiting for machine to come up
	I0108 22:15:36.865156  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:36.865635  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:36.865671  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:36.865575  376188 retry.go:31] will retry after 1.938816817s: waiting for machine to come up
	I0108 22:15:35.937824  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.288173217s)
	I0108 22:15:35.937859  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0108 22:15:35.937899  375205 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:35.938005  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0108 22:15:38.805792  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:38.806390  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:38.806420  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:38.806318  376188 retry.go:31] will retry after 2.933374936s: waiting for machine to come up
	I0108 22:15:41.741267  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:41.741924  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:41.741962  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:41.741850  376188 retry.go:31] will retry after 3.549554778s: waiting for machine to come up
	I0108 22:15:40.512566  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.574525189s)
	I0108 22:15:40.512605  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0108 22:15:40.512642  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:40.512699  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0108 22:15:43.180687  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.667951486s)
	I0108 22:15:43.180730  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0108 22:15:43.180766  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:43.180849  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0108 22:15:44.539187  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.35830707s)
	I0108 22:15:44.539234  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0108 22:15:44.539274  375205 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:44.539335  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0108 22:15:45.294867  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:45.295522  375293 main.go:141] libmachine: (embed-certs-903819) DBG | unable to find current IP address of domain embed-certs-903819 in network mk-embed-certs-903819
	I0108 22:15:45.295572  375293 main.go:141] libmachine: (embed-certs-903819) DBG | I0108 22:15:45.295439  376188 retry.go:31] will retry after 5.642834673s: waiting for machine to come up
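The "will retry after …" lines above come from a backoff retry loop while libmachine waits for the VM's DHCP lease. A minimal sketch of that idea is below; the function shape, jitter factor, and the waiting callback are illustrative assumptions, not minikube's actual retry.go API.

// retry_sketch.go - illustrative backoff retry loop in the spirit of the
// "will retry after ..." messages above. Not minikube's real retry.go.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// retryWithBackoff calls fn until it succeeds or attempts run out,
// sleeping a jittered, doubling delay between attempts.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// add up to ~50% jitter so parallel waiters do not sync up
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	tries := 0
	_ = retryWithBackoff(func() error {
		tries++
		if tries < 3 {
			return errNoIP // pretend the DHCP lease is not there yet
		}
		return nil
	}, 5, 500*time.Millisecond)
}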
	I0108 22:15:46.498360  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.95899411s)
	I0108 22:15:46.498392  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0108 22:15:46.498417  375205 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:46.498473  375205 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0108 22:15:47.553626  375205 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.055107765s)
	I0108 22:15:47.553672  375205 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0108 22:15:47.553708  375205 cache_images.go:123] Successfully loaded all cached images
	I0108 22:15:47.553715  375205 cache_images.go:92] LoadImages completed in 17.852269213s
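The "Loading image: …" / "Transferred and loaded …" pairs above load each cached image tarball on the guest with `sudo podman load -i`, one at a time. A hedged sketch of the same idea with os/exec follows; the image list and error handling are assumptions for illustration only.

// load_images_sketch.go - loads a list of cached image tarballs one at a
// time with "sudo podman load -i". Illustrative, not minikube's cache_images code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func loadImage(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	fmt.Printf("loaded %s in %s\n", tarball, time.Since(start))
	return nil
}

func main() {
	images := []string{
		"/var/lib/minikube/images/etcd_3.5.10-0",
		"/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2",
	}
	for _, img := range images {
		if err := loadImage(img); err != nil {
			fmt.Println(err)
			return
		}
	}
}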
	I0108 22:15:47.553796  375205 ssh_runner.go:195] Run: crio config
	I0108 22:15:47.626385  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:15:47.626428  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:15:47.626471  375205 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:15:47.626503  375205 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.153 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-675668 NodeName:no-preload-675668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:15:47.626764  375205 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-675668"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:15:47.626889  375205 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-675668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-675668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
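The kubeadm config and kubelet unit above are rendered from the cluster options shown in the "kubeadm options" line. A minimal sketch of that rendering step with text/template is below; the template text and the nodeOpts struct are illustrative assumptions, not minikube's real template, which covers the full config printed above.

// kubeadm_template_sketch.go - renders a small InitConfiguration fragment
// from a few node options with text/template. Illustrative only.
package main

import (
	"os"
	"text/template"
)

type nodeOpts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, nodeOpts{
		AdvertiseAddress: "192.168.61.153",
		BindPort:         8443,
		NodeName:         "no-preload-675668",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	})
}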
	I0108 22:15:47.626994  375205 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0108 22:15:47.638161  375205 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:15:47.638263  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:15:47.648004  375205 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0108 22:15:47.667877  375205 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0108 22:15:47.685914  375205 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0108 22:15:47.705814  375205 ssh_runner.go:195] Run: grep 192.168.61.153	control-plane.minikube.internal$ /etc/hosts
	I0108 22:15:47.709842  375205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
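The bash one-liner above drops any existing control-plane.minikube.internal entry from /etc/hosts and appends a fresh one for the node IP. A hedged Go sketch of the same idea against an arbitrary hosts-style file follows; the file path in main is illustrative, and it should be pointed at a scratch copy, not the real /etc/hosts.

// hosts_entry_sketch.go - ensures a hosts file maps a name to exactly one IP,
// the same effect as the grep/echo one-liner above. Illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		// drop any previous line that already ends with this hostname
		if strings.HasSuffix(trimmed, "\t"+host) || strings.HasSuffix(trimmed, " "+host) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.sample", "192.168.61.153", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}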
	I0108 22:15:47.724788  375205 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668 for IP: 192.168.61.153
	I0108 22:15:47.724877  375205 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:15:47.725349  375205 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:15:47.725420  375205 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:15:47.725541  375205 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.key
	I0108 22:15:47.725626  375205 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.key.0768d075
	I0108 22:15:47.725668  375205 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.key
	I0108 22:15:47.725793  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:15:47.725822  375205 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:15:47.725837  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:15:47.725861  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:15:47.725886  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:15:47.725908  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:15:47.725952  375205 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:47.727130  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:15:47.753432  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:15:47.780962  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:15:47.807446  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:15:47.834334  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:15:47.861638  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:15:47.889479  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:15:47.916119  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:15:47.944635  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:15:47.971740  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:15:47.998594  375205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:15:48.025907  375205 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:15:48.044525  375205 ssh_runner.go:195] Run: openssl version
	I0108 22:15:48.050542  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:15:48.061205  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.066945  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.067060  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:15:48.074266  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:15:48.084613  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:15:48.095856  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.101596  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.101677  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:15:48.108991  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:15:48.120690  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:15:48.130747  375205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.135480  375205 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.135576  375205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:15:48.141462  375205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:15:48.152597  375205 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:15:48.158657  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:15:48.165978  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:15:48.174164  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:15:48.181140  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:15:48.187819  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:15:48.194088  375205 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
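Each `openssl x509 -checkend 86400` call above asks whether the given certificate expires within the next 24 hours, which decides whether the existing control-plane certs can be reused. A sketch of the same check with crypto/x509 is below; the certificate path in main is illustrative.

// cert_checkend_sketch.go - the Go equivalent of
// "openssl x509 -noout -in <cert> -checkend 86400": report whether a
// PEM-encoded certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}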
	I0108 22:15:48.200487  375205 kubeadm.go:404] StartCluster: {Name:no-preload-675668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-675668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.153 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:15:48.200612  375205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:15:48.200686  375205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:15:48.244804  375205 cri.go:89] found id: ""
	I0108 22:15:48.244894  375205 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:15:48.255502  375205 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:15:48.255549  375205 kubeadm.go:636] restartCluster start
	I0108 22:15:48.255625  375205 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:15:48.265914  375205 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:48.267815  375205 kubeconfig.go:92] found "no-preload-675668" server: "https://192.168.61.153:8443"
	I0108 22:15:48.271555  375205 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:15:48.281619  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:48.281694  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:48.293360  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:48.781917  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:48.782063  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:48.795101  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:49.281683  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:49.281784  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:49.295392  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:49.781910  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:49.782011  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:49.795016  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
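The repeated "Checking apiserver status …" blocks poll the guest with `sudo pgrep -xnf kube-apiserver.*minikube.*` until a matching process shows up (exit status 1 just means it has not started yet). A sketch of that polling loop is below; the 500ms interval and 10s deadline are assumptions for illustration.

// apiserver_poll_sketch.go - polls for a running kube-apiserver with pgrep,
// roughly what the repeated "Checking apiserver status ..." lines do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", err // exit status 1: no matching process yet
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Println("apiserver pid:", pid)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not come up before the deadline")
}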
	I0108 22:15:52.309259  375556 start.go:369] acquired machines lock for "default-k8s-diff-port-292054" in 4m6.099929885s
	I0108 22:15:52.309332  375556 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:15:52.309353  375556 fix.go:54] fixHost starting: 
	I0108 22:15:52.309795  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:15:52.309827  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:15:52.327510  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42663
	I0108 22:15:52.328130  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:15:52.328844  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:15:52.328877  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:15:52.329458  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:15:52.329740  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:15:52.329938  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:15:52.331851  375556 fix.go:102] recreateIfNeeded on default-k8s-diff-port-292054: state=Stopped err=<nil>
	I0108 22:15:52.331887  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	W0108 22:15:52.332071  375556 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:15:52.334604  375556 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-292054" ...
	I0108 22:15:50.942498  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.943038  375293 main.go:141] libmachine: (embed-certs-903819) Found IP for machine: 192.168.72.132
	I0108 22:15:50.943076  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has current primary IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.943087  375293 main.go:141] libmachine: (embed-certs-903819) Reserving static IP address...
	I0108 22:15:50.943577  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "embed-certs-903819", mac: "52:54:00:73:74:da", ip: "192.168.72.132"} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:50.943606  375293 main.go:141] libmachine: (embed-certs-903819) Reserved static IP address: 192.168.72.132
	I0108 22:15:50.943620  375293 main.go:141] libmachine: (embed-certs-903819) DBG | skip adding static IP to network mk-embed-certs-903819 - found existing host DHCP lease matching {name: "embed-certs-903819", mac: "52:54:00:73:74:da", ip: "192.168.72.132"}
	I0108 22:15:50.943636  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Getting to WaitForSSH function...
	I0108 22:15:50.943655  375293 main.go:141] libmachine: (embed-certs-903819) Waiting for SSH to be available...
	I0108 22:15:50.945879  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.946330  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:50.946362  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:50.946493  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Using SSH client type: external
	I0108 22:15:50.946532  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa (-rw-------)
	I0108 22:15:50.946589  375293 main.go:141] libmachine: (embed-certs-903819) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:15:50.946606  375293 main.go:141] libmachine: (embed-certs-903819) DBG | About to run SSH command:
	I0108 22:15:50.946641  375293 main.go:141] libmachine: (embed-certs-903819) DBG | exit 0
	I0108 22:15:51.051155  375293 main.go:141] libmachine: (embed-certs-903819) DBG | SSH cmd err, output: <nil>: 
	I0108 22:15:51.051655  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetConfigRaw
	I0108 22:15:51.052363  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:51.054890  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.055247  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.055276  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.055618  375293 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/config.json ...
	I0108 22:15:51.055862  375293 machine.go:88] provisioning docker machine ...
	I0108 22:15:51.055887  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:51.056117  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.056263  375293 buildroot.go:166] provisioning hostname "embed-certs-903819"
	I0108 22:15:51.056283  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.056427  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.058406  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.058775  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.058822  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.058953  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.059154  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.059318  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.059478  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.059654  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.060145  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.060166  375293 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-903819 && echo "embed-certs-903819" | sudo tee /etc/hostname
	I0108 22:15:51.207967  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-903819
	
	I0108 22:15:51.208007  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.210549  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.210848  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.210876  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.211120  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.211372  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.211539  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.211707  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.211879  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.212375  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.212399  375293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-903819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-903819/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-903819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:15:51.356887  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:15:51.356936  375293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:15:51.356968  375293 buildroot.go:174] setting up certificates
	I0108 22:15:51.356997  375293 provision.go:83] configureAuth start
	I0108 22:15:51.357012  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetMachineName
	I0108 22:15:51.357424  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:51.360156  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.360553  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.360590  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.360735  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.363438  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.363850  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.363905  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.364020  375293 provision.go:138] copyHostCerts
	I0108 22:15:51.364111  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:15:51.364126  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:15:51.364193  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:15:51.364332  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:15:51.364347  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:15:51.364376  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:15:51.364453  375293 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:15:51.364463  375293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:15:51.364490  375293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:15:51.364552  375293 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.embed-certs-903819 san=[192.168.72.132 192.168.72.132 localhost 127.0.0.1 minikube embed-certs-903819]
	I0108 22:15:51.472949  375293 provision.go:172] copyRemoteCerts
	I0108 22:15:51.473023  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:15:51.473053  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.476622  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.476975  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.476997  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.477269  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.477524  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.477719  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.477852  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:51.576283  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:15:51.604809  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:15:51.633353  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 22:15:51.660375  375293 provision.go:86] duration metric: configureAuth took 303.352585ms
	I0108 22:15:51.660422  375293 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:15:51.660657  375293 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:15:51.660764  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:51.664337  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.664738  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:51.664796  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:51.665089  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:51.665394  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.665649  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:51.665823  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:51.666047  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:51.666595  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:51.666633  375293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:15:52.023397  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:15:52.023450  375293 machine.go:91] provisioned docker machine in 967.568803ms
	I0108 22:15:52.023469  375293 start.go:300] post-start starting for "embed-certs-903819" (driver="kvm2")
	I0108 22:15:52.023485  375293 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:15:52.023514  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.023922  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:15:52.023979  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.026998  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.027417  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.027447  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.027665  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.027875  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.028050  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.028240  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.126087  375293 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:15:52.130371  375293 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:15:52.130414  375293 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:15:52.130509  375293 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:15:52.130609  375293 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:15:52.130738  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:15:52.139897  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:52.166648  375293 start.go:303] post-start completed in 143.156785ms
	I0108 22:15:52.166691  375293 fix.go:56] fixHost completed within 24.077726567s
	I0108 22:15:52.166721  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.169452  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.169849  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.169880  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.170156  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.170463  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.170716  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.170909  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.171089  375293 main.go:141] libmachine: Using SSH client type: native
	I0108 22:15:52.171520  375293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0108 22:15:52.171535  375293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:15:52.309104  375293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752152.251541184
	
	I0108 22:15:52.309136  375293 fix.go:206] guest clock: 1704752152.251541184
	I0108 22:15:52.309146  375293 fix.go:219] Guest: 2024-01-08 22:15:52.251541184 +0000 UTC Remote: 2024-01-08 22:15:52.166696501 +0000 UTC m=+279.417512277 (delta=84.844683ms)
	I0108 22:15:52.309173  375293 fix.go:190] guest clock delta is within tolerance: 84.844683ms
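The guest clock check above runs `date +%s.%N` inside the VM and compares it with the host clock, accepting the restart only if the delta is within tolerance. A small sketch of that comparison follows; the tolerance value is an illustrative assumption, and the fractional part is assumed to be nanoseconds as produced by `%N`.

// clock_delta_sketch.go - parses a "seconds.nanoseconds" timestamp (the
// guest's `date +%s.%N` output) and checks it against the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// assumes a full 9-digit nanosecond field, as %N prints
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1704752152.251541184")
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}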
	I0108 22:15:52.309180  375293 start.go:83] releasing machines lock for "embed-certs-903819", held for 24.220254192s
	I0108 22:15:52.309214  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.309514  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:52.312538  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.312905  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.312928  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.313161  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313692  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313879  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:15:52.313971  375293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:15:52.314031  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.314154  375293 ssh_runner.go:195] Run: cat /version.json
	I0108 22:15:52.314185  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:15:52.316938  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317214  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317363  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.317398  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:52.317425  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317456  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:52.317599  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.317746  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:15:52.317803  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.317882  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:15:52.318074  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.318074  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:15:52.318273  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.318332  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:15:52.451292  375293 ssh_runner.go:195] Run: systemctl --version
	I0108 22:15:52.459839  375293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:15:52.609989  375293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:15:52.617215  375293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:15:52.617326  375293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:15:52.633017  375293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:15:52.633068  375293 start.go:475] detecting cgroup driver to use...
	I0108 22:15:52.633180  375293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:15:52.649947  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:15:52.664459  375293 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:15:52.664530  375293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:15:52.680105  375293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:15:52.696100  375293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:15:52.814616  375293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:15:52.951975  375293 docker.go:219] disabling docker service ...
	I0108 22:15:52.952086  375293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:15:52.967800  375293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:15:52.982903  375293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:15:53.107033  375293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:15:53.222765  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:15:53.238572  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:15:53.260919  375293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:15:53.261035  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.271980  375293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:15:53.272084  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.283693  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:15:53.298686  375293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
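The sed commands above switch CRI-O's pause image and cgroup manager by rewriting "key = value" lines in /etc/crio/crio.conf.d/02-crio.conf. A hedged Go sketch of rewriting one such key with a regexp is below; the file name in main is illustrative and it should be run against a scratch copy of the drop-in.

// crio_conf_sketch.go - rewrites a single "key = value" line in a CRI-O
// drop-in, the same effect as the sed invocations above. Illustrative only.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// match e.g. `pause_image = "..."` (optionally commented out) at line start
	re := regexp.MustCompile(`(?m)^\s*#?\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	replaced := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, replaced, 0644)
}

func main() {
	if err := setConfValue("02-crio.conf", "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
		fmt.Println(err)
	}
}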
	I0108 22:15:53.310543  375293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:15:53.322108  375293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:15:53.331904  375293 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:15:53.331982  375293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:15:53.347091  375293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:15:53.358365  375293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:15:53.462607  375293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:15:53.658267  375293 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:15:53.658362  375293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:15:53.663859  375293 start.go:543] Will wait 60s for crictl version
	I0108 22:15:53.663941  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:15:53.668413  375293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:15:53.714319  375293 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:15:53.714456  375293 ssh_runner.go:195] Run: crio --version
	I0108 22:15:53.774601  375293 ssh_runner.go:195] Run: crio --version
	I0108 22:15:53.840055  375293 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 22:15:50.282005  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:50.282118  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:50.296034  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:50.781676  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:50.781865  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:50.794250  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:51.281771  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:51.281866  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:51.296593  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:51.782094  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:51.782193  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:51.797110  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.281711  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:52.281844  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:52.294916  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.782076  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:52.782193  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:52.796700  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:53.282191  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:53.282320  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:53.300226  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:53.781708  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:53.781807  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:53.794426  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:54.281901  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:54.282005  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:54.305276  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:54.781646  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:54.781765  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:54.798991  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:52.336203  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Start
	I0108 22:15:52.336440  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring networks are active...
	I0108 22:15:52.337318  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring network default is active
	I0108 22:15:52.337660  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Ensuring network mk-default-k8s-diff-port-292054 is active
	I0108 22:15:52.338019  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Getting domain xml...
	I0108 22:15:52.338689  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Creating domain...
	I0108 22:15:53.715046  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting to get IP...
	I0108 22:15:53.716237  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.716849  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.716944  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:53.716801  376345 retry.go:31] will retry after 252.013763ms: waiting for machine to come up
	I0108 22:15:53.970408  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.971019  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:53.971049  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:53.970958  376345 retry.go:31] will retry after 266.473735ms: waiting for machine to come up
	I0108 22:15:54.239763  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.240226  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.240251  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:54.240173  376345 retry.go:31] will retry after 429.57645ms: waiting for machine to come up
	I0108 22:15:54.672202  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.672716  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:54.672752  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:54.672669  376345 retry.go:31] will retry after 585.093805ms: waiting for machine to come up
	I0108 22:15:55.259153  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.259706  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.259743  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:55.259654  376345 retry.go:31] will retry after 689.434093ms: waiting for machine to come up
	I0108 22:15:55.950610  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.951205  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:55.951239  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:55.951157  376345 retry.go:31] will retry after 895.874654ms: waiting for machine to come up
	I0108 22:15:53.841949  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetIP
	I0108 22:15:53.845797  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:53.846200  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:15:53.846248  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:15:53.846494  375293 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0108 22:15:53.851791  375293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:53.866130  375293 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:15:53.866207  375293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:53.932186  375293 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:15:53.932311  375293 ssh_runner.go:195] Run: which lz4
	I0108 22:15:53.937259  375293 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:15:53.944022  375293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:15:53.944077  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 22:15:55.993976  375293 crio.go:444] Took 2.056742 seconds to copy over tarball
	I0108 22:15:55.994073  375293 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:15:55.281653  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:55.281788  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:55.303179  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:55.781655  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:55.781803  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:55.801287  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:56.281804  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:56.281897  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:56.306479  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:56.782123  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:56.782248  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:56.799241  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:57.281778  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:57.281926  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:57.299917  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:57.782255  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:57.782392  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:57.797960  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:58.282738  375205 api_server.go:166] Checking apiserver status ...
	I0108 22:15:58.282919  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:15:58.300271  375205 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:15:58.300333  375205 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:15:58.300349  375205 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:15:58.300365  375205 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:15:58.300452  375205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:15:58.353658  375205 cri.go:89] found id: ""
	I0108 22:15:58.353755  375205 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:15:58.372503  375205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:15:58.393266  375205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:15:58.393366  375205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:15:58.406210  375205 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:15:58.406255  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:58.570457  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:59.811449  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.240942109s)
	I0108 22:15:59.811494  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:15:56.848455  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:56.848893  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:56.848925  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:56.848869  376345 retry.go:31] will retry after 1.095460706s: waiting for machine to come up
	I0108 22:15:57.946534  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:57.947045  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:57.947084  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:57.947000  376345 retry.go:31] will retry after 975.046116ms: waiting for machine to come up
	I0108 22:15:58.923872  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:15:58.924402  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:15:58.924436  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:15:58.924351  376345 retry.go:31] will retry after 1.855498831s: waiting for machine to come up
	I0108 22:16:00.781295  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:00.781813  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:00.781842  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:00.781745  376345 retry.go:31] will retry after 1.560909915s: waiting for machine to come up
	I0108 22:15:59.648230  375293 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.654100182s)
	I0108 22:15:59.648275  375293 crio.go:451] Took 3.654264 seconds to extract the tarball
	I0108 22:15:59.648293  375293 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:15:59.707614  375293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:15:59.763291  375293 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:15:59.763318  375293 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:15:59.763416  375293 ssh_runner.go:195] Run: crio config
	I0108 22:15:59.840951  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:15:59.840986  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:15:59.841015  375293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:15:59.841038  375293 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-903819 NodeName:embed-certs-903819 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:15:59.841205  375293 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-903819"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:15:59.841283  375293 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-903819 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-903819 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:15:59.841341  375293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:15:59.854399  375293 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:15:59.854521  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:15:59.864630  375293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0108 22:15:59.887590  375293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:15:59.907618  375293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0108 22:15:59.930429  375293 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I0108 22:15:59.935347  375293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:15:59.954840  375293 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819 for IP: 192.168.72.132
	I0108 22:15:59.954893  375293 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:15:59.955092  375293 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:15:59.955151  375293 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:15:59.955277  375293 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/client.key
	I0108 22:15:59.955460  375293 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.key.b7fe571d
	I0108 22:15:59.955557  375293 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.key
	I0108 22:15:59.955780  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:15:59.955832  375293 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:15:59.955855  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:15:59.955897  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:15:59.955931  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:15:59.955962  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:15:59.956023  375293 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:15:59.957003  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:15:59.984268  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:16:00.018065  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:00.049758  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/embed-certs-903819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:00.079731  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:00.115904  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:00.148655  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:00.186204  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:00.224356  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:00.258906  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:00.293420  375293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:00.328219  375293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:00.351811  375293 ssh_runner.go:195] Run: openssl version
	I0108 22:16:00.360327  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:00.373384  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.381553  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.381653  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:00.391609  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:00.406242  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:00.419455  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.426093  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.426218  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:00.433793  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:00.446550  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:00.463174  375293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.470386  375293 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.470471  375293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:00.477752  375293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:00.492003  375293 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:00.498273  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:00.506305  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:00.515120  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:00.523909  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:00.531966  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:00.540080  375293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
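The openssl invocations above do two things: compute the subject hash used to name the /etc/ssl/certs/<hash>.0 symlinks (e.g. b5213941.0 for minikubeCA.pem), and confirm each certificate will still be valid 86400 seconds (24h) from now via -checkend. A minimal Go sketch of those same two checks, not minikube's code (assumptions: openssl is on PATH; the certificate path is taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// "openssl x509 -hash -noout -in <cert>" prints the subject hash that the
	// /etc/ssl/certs/<hash>.0 symlink is named after.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("would link %s -> /etc/ssl/certs/%s.0\n", certPath, hash)

	// "openssl x509 -noout -in <cert> -checkend 86400" exits non-zero if the
	// certificate expires within the next 24 hours.
	if err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run(); err != nil {
		fmt.Println("certificate expires within 24h (or check failed):", err)
		return
	}
	fmt.Println("certificate valid for at least another 24h")
}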
	I0108 22:16:00.547673  375293 kubeadm.go:404] StartCluster: {Name:embed-certs-903819 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-903819 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:00.547852  375293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:00.547933  375293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:00.596555  375293 cri.go:89] found id: ""
	I0108 22:16:00.596644  375293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:00.607985  375293 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:00.608023  375293 kubeadm.go:636] restartCluster start
	I0108 22:16:00.608092  375293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:00.620669  375293 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:00.621860  375293 kubeconfig.go:92] found "embed-certs-903819" server: "https://192.168.72.132:8443"
	I0108 22:16:00.624246  375293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:00.638481  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:00.638578  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:00.658261  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:01.138670  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:01.138876  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:01.154778  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:01.639152  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:01.639290  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:01.659301  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:02.138679  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:02.138871  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:02.159427  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:02.638859  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:02.638970  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:02.660608  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:00.076906  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:00.244500  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:00.356164  375205 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:00.356290  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:00.856674  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:01.356420  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:01.857416  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:02.356778  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:02.857385  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:03.356493  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:03.379896  375205 api_server.go:72] duration metric: took 3.023730091s to wait for apiserver process to appear ...
	I0108 22:16:03.379953  375205 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:03.380023  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:02.344786  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:02.345408  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:02.345444  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:02.345339  376345 retry.go:31] will retry after 2.336202352s: waiting for machine to come up
	I0108 22:16:04.685192  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:04.685894  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:04.685947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:04.685809  376345 retry.go:31] will retry after 3.559467663s: waiting for machine to come up
	I0108 22:16:03.139113  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:03.139272  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:03.158043  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:03.638583  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:03.638737  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:03.659573  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:04.139075  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:04.139225  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:04.158993  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:04.638600  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:04.638766  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:04.657099  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:05.138627  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:05.138728  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:05.156654  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:05.639289  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:05.639436  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:05.658060  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:06.139303  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:06.139466  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:06.153866  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:06.638492  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:06.638651  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:06.656088  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.138685  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:07.138840  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:07.158365  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.638744  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:07.638838  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:07.656010  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:07.463229  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:07.463273  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:07.463299  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:07.534774  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:07.534812  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:07.880243  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:07.886835  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:07.886881  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:08.380688  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:08.385776  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:08.385821  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:08.880979  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:08.890142  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:08.890180  375205 api_server.go:103] status: https://192.168.61.153:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:09.380526  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:16:09.385856  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 200:
	ok
	I0108 22:16:09.394800  375205 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 22:16:09.394838  375205 api_server.go:131] duration metric: took 6.014875532s to wait for apiserver health ...
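The healthz wait above keeps requesting https://192.168.61.153:8443/healthz until the 403/500 responses give way to 200. A minimal Go sketch of such a polling loop, not minikube's actual implementation (the URL is taken from the log; skipping TLS verification stands in for minikube's cluster-CA handling and is an assumption for brevity):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip cert verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.61.153:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
			// 403 while RBAC bootstraps and 500 while post-start hooks finish
			// are treated as "not ready yet", as in the log above.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}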
	I0108 22:16:09.394851  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:16:09.394861  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:09.396785  375205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:09.398197  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:09.422683  375205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:09.464557  375205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:09.483416  375205 system_pods.go:59] 8 kube-system pods found
	I0108 22:16:09.483460  375205 system_pods.go:61] "coredns-76f75df574-v8fsw" [7d69f8ec-6684-49d0-8567-4032298a4e5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:09.483471  375205 system_pods.go:61] "etcd-no-preload-675668" [bc088c6e-5037-4e51-a021-2c5ac3c1c60c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:09.483488  375205 system_pods.go:61] "kube-apiserver-no-preload-675668" [0bbdf118-c47c-4298-ae5e-a984729ec21e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:09.483497  375205 system_pods.go:61] "kube-controller-manager-no-preload-675668" [2c3ff259-60a7-4205-b55f-85fe2d8e340d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:09.483513  375205 system_pods.go:61] "kube-proxy-dnbvk" [1803ec6b-5bd3-4ebb-bfd5-3a1356a1f168] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:09.483522  375205 system_pods.go:61] "kube-scheduler-no-preload-675668" [47737c5e-b59a-4df0-ac7c-36525e17733c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:09.483532  375205 system_pods.go:61] "metrics-server-57f55c9bc5-pk8bm" [71c7c744-c5fa-41e7-a26f-c04c30379b97] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:09.483537  375205 system_pods.go:61] "storage-provisioner" [1266430c-beda-4fa1-a057-cb07b8bf1f89] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:09.483547  375205 system_pods.go:74] duration metric: took 18.952011ms to wait for pod list to return data ...
	I0108 22:16:09.483562  375205 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:09.502939  375205 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:09.502989  375205 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:09.503007  375205 node_conditions.go:105] duration metric: took 19.439582ms to run NodePressure ...
	I0108 22:16:09.503031  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:08.246675  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:08.247243  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | unable to find current IP address of domain default-k8s-diff-port-292054 in network mk-default-k8s-diff-port-292054
	I0108 22:16:08.247302  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | I0108 22:16:08.247185  376345 retry.go:31] will retry after 3.860632675s: waiting for machine to come up
	I0108 22:16:08.139286  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:08.139413  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:08.155694  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:08.639385  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:08.639521  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:08.655368  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:09.139022  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:09.139171  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:09.153512  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:09.638642  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:09.638765  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:09.653202  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.138833  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:10.138924  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:10.153529  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.639273  375293 api_server.go:166] Checking apiserver status ...
	I0108 22:16:10.639462  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:10.655947  375293 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:10.655981  375293 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:10.655991  375293 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:10.656003  375293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:10.656082  375293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:10.706638  375293 cri.go:89] found id: ""
	I0108 22:16:10.706721  375293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:10.726540  375293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:10.739540  375293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:10.739619  375293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:10.751112  375293 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:10.751158  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:10.877306  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.453755  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.664034  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.778440  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:11.866216  375293 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:11.866364  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:12.366749  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.862826  374880 start.go:369] acquired machines lock for "old-k8s-version-079759" in 1m1.534060396s
	I0108 22:16:13.862908  374880 start.go:96] Skipping create...Using existing machine configuration
	I0108 22:16:13.862922  374880 fix.go:54] fixHost starting: 
	I0108 22:16:13.863465  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:16:13.863514  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:16:13.890658  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0108 22:16:13.891256  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:16:13.891974  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:16:13.891997  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:16:13.892356  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:16:13.892526  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:13.892634  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:16:13.894503  374880 fix.go:102] recreateIfNeeded on old-k8s-version-079759: state=Stopped err=<nil>
	I0108 22:16:13.894532  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	W0108 22:16:13.894707  374880 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 22:16:13.896778  374880 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-079759" ...
	I0108 22:16:13.898346  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Start
	I0108 22:16:13.898517  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring networks are active...
	I0108 22:16:13.899441  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring network default is active
	I0108 22:16:13.899906  374880 main.go:141] libmachine: (old-k8s-version-079759) Ensuring network mk-old-k8s-version-079759 is active
	I0108 22:16:13.900424  374880 main.go:141] libmachine: (old-k8s-version-079759) Getting domain xml...
	I0108 22:16:13.901232  374880 main.go:141] libmachine: (old-k8s-version-079759) Creating domain...
	I0108 22:16:10.069721  375205 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:10.077465  375205 kubeadm.go:787] kubelet initialised
	I0108 22:16:10.077494  375205 kubeadm.go:788] duration metric: took 7.739231ms waiting for restarted kubelet to initialise ...
	I0108 22:16:10.077503  375205 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:10.085099  375205 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-v8fsw" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:12.095498  375205 pod_ready.go:102] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:14.100054  375205 pod_ready.go:102] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:12.111578  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.112089  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Found IP for machine: 192.168.50.18
	I0108 22:16:12.112118  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Reserving static IP address...
	I0108 22:16:12.112138  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has current primary IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.112627  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-292054", mac: "52:54:00:8d:25:78", ip: "192.168.50.18"} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.112660  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Reserved static IP address: 192.168.50.18
	I0108 22:16:12.112684  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | skip adding static IP to network mk-default-k8s-diff-port-292054 - found existing host DHCP lease matching {name: "default-k8s-diff-port-292054", mac: "52:54:00:8d:25:78", ip: "192.168.50.18"}
	I0108 22:16:12.112706  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Getting to WaitForSSH function...
	I0108 22:16:12.112729  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Waiting for SSH to be available...
	I0108 22:16:12.115245  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.115723  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.115762  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.115881  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Using SSH client type: external
	I0108 22:16:12.115917  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa (-rw-------)
	I0108 22:16:12.115947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:16:12.115967  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | About to run SSH command:
	I0108 22:16:12.116013  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | exit 0
	I0108 22:16:12.221209  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | SSH cmd err, output: <nil>: 
	I0108 22:16:12.221755  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetConfigRaw
	I0108 22:16:12.222634  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:12.225565  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.226008  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.226036  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.226326  375556 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/config.json ...
	I0108 22:16:12.226626  375556 machine.go:88] provisioning docker machine ...
	I0108 22:16:12.226658  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:12.226946  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.227160  375556 buildroot.go:166] provisioning hostname "default-k8s-diff-port-292054"
	I0108 22:16:12.227187  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.227381  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.230424  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.230867  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.230918  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.231036  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.231302  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.231511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.231674  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.231856  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:12.232448  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:12.232476  375556 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-292054 && echo "default-k8s-diff-port-292054" | sudo tee /etc/hostname
	I0108 22:16:12.382972  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-292054
	
	I0108 22:16:12.383015  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.386658  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.387055  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.387110  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.387410  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.387780  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.388020  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.388284  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.388576  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:12.388935  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:12.388954  375556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-292054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-292054/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-292054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:16:12.536473  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:16:12.536514  375556 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:16:12.536597  375556 buildroot.go:174] setting up certificates
	I0108 22:16:12.536619  375556 provision.go:83] configureAuth start
	I0108 22:16:12.536638  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetMachineName
	I0108 22:16:12.536995  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:12.540248  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.540775  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.540813  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.540982  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.544343  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.544924  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.544986  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.545143  375556 provision.go:138] copyHostCerts
	I0108 22:16:12.545241  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:16:12.545256  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:16:12.545329  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:16:12.545468  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:16:12.545485  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:16:12.545525  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:16:12.545603  375556 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:16:12.545612  375556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:16:12.545630  375556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:16:12.545717  375556 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-292054 san=[192.168.50.18 192.168.50.18 localhost 127.0.0.1 minikube default-k8s-diff-port-292054]
	I0108 22:16:12.853268  375556 provision.go:172] copyRemoteCerts
	I0108 22:16:12.853332  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:16:12.853359  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:12.856503  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.856926  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:12.856959  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:12.857295  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:12.857536  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:12.857699  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:12.857904  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:12.961751  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:16:12.999065  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 22:16:13.037282  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:16:13.075006  375556 provision.go:86] duration metric: configureAuth took 538.367435ms
	I0108 22:16:13.075048  375556 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:16:13.075403  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:16:13.075509  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.078643  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.079141  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.079187  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.079518  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.079765  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.079976  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.080145  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.080388  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:13.080860  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:13.080891  375556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:16:13.523316  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:16:13.523355  375556 machine.go:91] provisioned docker machine in 1.296708962s
	I0108 22:16:13.523391  375556 start.go:300] post-start starting for "default-k8s-diff-port-292054" (driver="kvm2")
	I0108 22:16:13.523427  375556 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:16:13.523458  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.523937  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:16:13.523982  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.528392  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.528941  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.529005  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.529344  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.529715  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.529947  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.530160  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:13.644605  375556 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:16:13.651917  375556 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:16:13.651970  375556 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:16:13.652120  375556 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:16:13.652268  375556 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:16:13.652452  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:16:13.667715  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:13.707995  375556 start.go:303] post-start completed in 184.580746ms
	I0108 22:16:13.708032  375556 fix.go:56] fixHost completed within 21.398677633s
	I0108 22:16:13.708061  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.712186  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.712754  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.712785  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.713001  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.713308  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.713572  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.713784  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.714062  375556 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:13.714576  375556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.18 22 <nil> <nil>}
	I0108 22:16:13.714597  375556 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:16:13.862558  375556 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752173.800899341
	
	I0108 22:16:13.862600  375556 fix.go:206] guest clock: 1704752173.800899341
	I0108 22:16:13.862613  375556 fix.go:219] Guest: 2024-01-08 22:16:13.800899341 +0000 UTC Remote: 2024-01-08 22:16:13.708038237 +0000 UTC m=+267.678081968 (delta=92.861104ms)
	I0108 22:16:13.862688  375556 fix.go:190] guest clock delta is within tolerance: 92.861104ms
	I0108 22:16:13.862700  375556 start.go:83] releasing machines lock for "default-k8s-diff-port-292054", held for 21.553389859s
	I0108 22:16:13.862760  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.863344  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:13.867702  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.868132  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.868160  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.868553  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869294  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869606  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:16:13.869710  375556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:16:13.869908  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.870024  375556 ssh_runner.go:195] Run: cat /version.json
	I0108 22:16:13.870055  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:16:13.874047  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.874604  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.874637  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876082  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876102  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.876135  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:13.876339  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:13.876083  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:16:13.876354  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.876518  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:16:13.876771  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.876808  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:16:13.876928  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:13.877140  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:16:14.020544  375556 ssh_runner.go:195] Run: systemctl --version
	I0108 22:16:14.030180  375556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:16:14.192218  375556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:16:14.200925  375556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:16:14.201038  375556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:16:14.223169  375556 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:16:14.223200  375556 start.go:475] detecting cgroup driver to use...
	I0108 22:16:14.223274  375556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:16:14.246782  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:16:14.264283  375556 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:16:14.264417  375556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:16:14.281460  375556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:16:14.295968  375556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:16:14.443907  375556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:16:14.611299  375556 docker.go:219] disabling docker service ...
	I0108 22:16:14.611425  375556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:16:14.630493  375556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:16:14.649912  375556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:16:14.787666  375556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:16:14.971826  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:16:15.004969  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:16:15.032889  375556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:16:15.032982  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.050131  375556 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:16:15.050223  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.066011  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.082365  375556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:15.098387  375556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:16:15.115648  375556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:16:15.129675  375556 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:16:15.129848  375556 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:16:15.151333  375556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:16:15.165637  375556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:16:15.308416  375556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:16:15.580204  375556 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:16:15.580284  375556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:16:15.587895  375556 start.go:543] Will wait 60s for crictl version
	I0108 22:16:15.588108  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:16:15.594471  375556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:16:15.645175  375556 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:16:15.645273  375556 ssh_runner.go:195] Run: crio --version
	I0108 22:16:15.707630  375556 ssh_runner.go:195] Run: crio --version
	I0108 22:16:15.779275  375556 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 22:16:15.781032  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetIP
	I0108 22:16:15.784486  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:15.784896  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:16:15.784965  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:16:15.785126  375556 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0108 22:16:15.790707  375556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:15.810441  375556 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:16:15.810515  375556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:15.867423  375556 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:16:15.867591  375556 ssh_runner.go:195] Run: which lz4
	I0108 22:16:15.873029  375556 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:16:15.879394  375556 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:16:15.879500  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 22:16:12.867258  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.367211  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:13.866433  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.366622  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.866611  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:14.907073  375293 api_server.go:72] duration metric: took 3.040854669s to wait for apiserver process to appear ...
	I0108 22:16:14.907116  375293 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:14.907141  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:15.738179  374880 main.go:141] libmachine: (old-k8s-version-079759) Waiting to get IP...
	I0108 22:16:15.739231  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:15.739808  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:15.739893  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:15.739787  376492 retry.go:31] will retry after 271.587986ms: waiting for machine to come up
	I0108 22:16:16.013648  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.014344  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.014388  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.014267  376492 retry.go:31] will retry after 376.425749ms: waiting for machine to come up
	I0108 22:16:16.392497  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.392985  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.393013  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.392894  376492 retry.go:31] will retry after 340.776058ms: waiting for machine to come up
	I0108 22:16:16.735696  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:16.736412  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:16.736452  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:16.736349  376492 retry.go:31] will retry after 559.6759ms: waiting for machine to come up
	I0108 22:16:17.297397  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:17.297990  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:17.298027  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:17.297965  376492 retry.go:31] will retry after 738.214425ms: waiting for machine to come up
	I0108 22:16:18.038578  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:18.039239  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:18.039269  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:18.039120  376492 retry.go:31] will retry after 762.268706ms: waiting for machine to come up
	I0108 22:16:18.803986  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:18.804560  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:18.804589  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:18.804438  376492 retry.go:31] will retry after 1.027542644s: waiting for machine to come up
	I0108 22:16:15.104174  375205 pod_ready.go:92] pod "coredns-76f75df574-v8fsw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:15.104208  375205 pod_ready.go:81] duration metric: took 5.01907031s waiting for pod "coredns-76f75df574-v8fsw" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:15.104223  375205 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:17.117526  375205 pod_ready.go:102] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:19.615842  375205 pod_ready.go:102] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:17.971748  375556 crio.go:444] Took 2.098761 seconds to copy over tarball
	I0108 22:16:17.971905  375556 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:16:19.481826  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:19.481865  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:19.481883  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:19.529381  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:19.529427  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:19.907613  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:19.914772  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:19.914824  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:20.407461  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:20.418184  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:20.418238  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:20.908072  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:20.920042  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:20.920085  375293 api_server.go:103] status: https://192.168.72.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:21.407506  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:16:21.414375  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I0108 22:16:21.428398  375293 api_server.go:141] control plane version: v1.28.4
	I0108 22:16:21.428439  375293 api_server.go:131] duration metric: took 6.521312808s to wait for apiserver health ...
	I0108 22:16:21.428451  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:16:21.428460  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:21.920874  375293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:22.268512  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:22.284953  375293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:16:22.309346  375293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:22.465452  375293 system_pods.go:59] 9 kube-system pods found
	I0108 22:16:22.465501  375293 system_pods.go:61] "coredns-5dd5756b68-wxfs6" [965cab31-c39a-4885-bc6f-6575fe026794] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:22.465516  375293 system_pods.go:61] "coredns-5dd5756b68-zbjfn" [1b521296-8e4c-4252-a729-5727cd71d3f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:22.465534  375293 system_pods.go:61] "etcd-embed-certs-903819" [be30d1b3-e4a8-4daf-9c0e-f3b776499471] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:22.465546  375293 system_pods.go:61] "kube-apiserver-embed-certs-903819" [530546d9-1cec-45f5-9e3e-f5d08e913cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:22.465563  375293 system_pods.go:61] "kube-controller-manager-embed-certs-903819" [bb0d60c9-cdaf-491d-aa20-5a522f351e17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:22.465573  375293 system_pods.go:61] "kube-proxy-gjlx8" [9247e922-69de-4e59-a6d2-06c791d43031] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:22.465586  375293 system_pods.go:61] "kube-scheduler-embed-certs-903819" [1aa50057-5aa4-44b2-a762-6f0eee5b3856] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:22.465602  375293 system_pods.go:61] "metrics-server-57f55c9bc5-jswgz" [8f18e01f-981d-48fe-9ce6-5155794da657] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:22.465614  375293 system_pods.go:61] "storage-provisioner" [ea2ac609-5857-4597-9432-e2f4f4630ee2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:22.465629  375293 system_pods.go:74] duration metric: took 156.242171ms to wait for pod list to return data ...
	I0108 22:16:22.465643  375293 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:22.523465  375293 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:22.523529  375293 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:22.523552  375293 node_conditions.go:105] duration metric: took 57.897769ms to run NodePressure ...
	I0108 22:16:22.523585  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:19.833814  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:19.834296  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:19.834341  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:19.834229  376492 retry.go:31] will retry after 1.469300536s: waiting for machine to come up
	I0108 22:16:21.305138  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:21.305962  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:21.306001  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:21.305834  376492 retry.go:31] will retry after 1.215696449s: waiting for machine to come up
	I0108 22:16:22.523937  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:22.524780  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:22.524813  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:22.524676  376492 retry.go:31] will retry after 1.652609537s: waiting for machine to come up
	I0108 22:16:24.179958  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:24.180881  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:24.180910  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:24.180780  376492 retry.go:31] will retry after 2.03835476s: waiting for machine to come up
	I0108 22:16:21.115112  375205 pod_ready.go:92] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.115153  375205 pod_ready.go:81] duration metric: took 6.010921481s waiting for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.115169  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.130056  375205 pod_ready.go:92] pod "kube-apiserver-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.130113  375205 pod_ready.go:81] duration metric: took 14.932775ms waiting for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.130137  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.149011  375205 pod_ready.go:92] pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.149054  375205 pod_ready.go:81] duration metric: took 18.905543ms waiting for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.149071  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dnbvk" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.162994  375205 pod_ready.go:92] pod "kube-proxy-dnbvk" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.163037  375205 pod_ready.go:81] duration metric: took 13.956516ms waiting for pod "kube-proxy-dnbvk" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.163053  375205 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.172926  375205 pod_ready.go:92] pod "kube-scheduler-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:21.172975  375205 pod_ready.go:81] duration metric: took 9.906476ms waiting for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:21.172991  375205 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:23.182086  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:22.162439  375556 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.190451334s)
	I0108 22:16:22.162503  375556 crio.go:451] Took 4.190696 seconds to extract the tarball
	I0108 22:16:22.162522  375556 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:16:22.212617  375556 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:22.290948  375556 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:16:22.290982  375556 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:16:22.291067  375556 ssh_runner.go:195] Run: crio config
	I0108 22:16:22.361099  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:16:22.361135  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:22.361166  375556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:16:22.361192  375556 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.18 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-292054 NodeName:default-k8s-diff-port-292054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:16:22.361488  375556 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.18
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-292054"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:16:22.361599  375556 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-292054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 22:16:22.361681  375556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:16:22.376350  375556 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:16:22.376489  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:16:22.389808  375556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0108 22:16:22.414305  375556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:16:22.433716  375556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0108 22:16:22.461925  375556 ssh_runner.go:195] Run: grep 192.168.50.18	control-plane.minikube.internal$ /etc/hosts
	I0108 22:16:22.467236  375556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:22.484487  375556 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054 for IP: 192.168.50.18
	I0108 22:16:22.484537  375556 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:16:22.484688  375556 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:16:22.484724  375556 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:16:22.484794  375556 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/client.key
	I0108 22:16:22.484845  375556 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.key.4ed28ecc
	I0108 22:16:22.484886  375556 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.key
	I0108 22:16:22.485012  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:16:22.485042  375556 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:16:22.485056  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:16:22.485077  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:16:22.485107  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:16:22.485133  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:16:22.485182  375556 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:22.485917  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:16:22.516640  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:16:22.554723  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:22.589730  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:22.624933  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:22.656950  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:22.691213  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:22.725882  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:22.757465  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:22.789479  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:22.818877  375556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:22.848834  375556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:22.869951  375556 ssh_runner.go:195] Run: openssl version
	I0108 22:16:22.877921  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:22.892998  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.899697  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.899798  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:22.906225  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:22.918957  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:22.930809  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.937461  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.937595  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:22.945257  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:22.956453  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:22.969894  375556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.976162  375556 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.976249  375556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:22.983601  375556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:22.995487  375556 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:23.002869  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:23.011231  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:23.019450  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:23.028645  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:23.036530  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:23.044216  375556 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 22:16:23.050779  375556 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-292054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-292054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:23.050875  375556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:23.050968  375556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:23.098736  375556 cri.go:89] found id: ""
	I0108 22:16:23.098806  375556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:23.110702  375556 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:23.110738  375556 kubeadm.go:636] restartCluster start
	I0108 22:16:23.110807  375556 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:23.122131  375556 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.124018  375556 kubeconfig.go:92] found "default-k8s-diff-port-292054" server: "https://192.168.50.18:8444"
	I0108 22:16:23.127827  375556 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:23.141921  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:23.142029  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:23.155738  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.642320  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:23.642416  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:23.655783  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:24.142361  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:24.142522  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:24.161739  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:24.642247  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:24.642392  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:24.659564  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:25.142097  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:25.142341  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:25.156773  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:25.642249  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:25.642362  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:25.655785  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:23.802042  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.278422708s)
	I0108 22:16:23.802099  375293 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:23.816719  375293 kubeadm.go:787] kubelet initialised
	I0108 22:16:23.816770  375293 kubeadm.go:788] duration metric: took 14.659036ms waiting for restarted kubelet to initialise ...
	I0108 22:16:23.816787  375293 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:23.831999  375293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:25.843652  375293 pod_ready.go:102] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:26.220729  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:26.221388  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:26.221424  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:26.221322  376492 retry.go:31] will retry after 2.215929666s: waiting for machine to come up
	I0108 22:16:28.440185  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:28.440859  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:28.440894  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:28.440781  376492 retry.go:31] will retry after 4.455149908s: waiting for machine to come up
	I0108 22:16:25.184929  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:27.682851  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:29.685033  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:26.142553  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:26.142728  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:26.160691  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:26.642356  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:26.642469  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:26.656481  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.142104  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:27.142265  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:27.157378  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.642473  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:27.642577  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:27.656662  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:28.142925  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:28.143080  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:28.160815  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:28.642072  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:28.642188  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:28.662580  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:29.142008  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:29.142158  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:29.161132  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:29.642780  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:29.642919  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:29.661247  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:30.142588  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:30.142747  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:30.159262  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:30.642472  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:30.642650  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:30.659741  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:27.847129  375293 pod_ready.go:102] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:30.347456  375293 pod_ready.go:92] pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:30.347490  375293 pod_ready.go:81] duration metric: took 6.51546229s waiting for pod "coredns-5dd5756b68-wxfs6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.347501  375293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.354929  375293 pod_ready.go:92] pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:30.354955  375293 pod_ready.go:81] duration metric: took 7.447354ms waiting for pod "coredns-5dd5756b68-zbjfn" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:30.354965  375293 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.867755  375293 pod_ready.go:92] pod "etcd-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.867788  375293 pod_ready.go:81] duration metric: took 1.512815387s waiting for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.867801  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.875662  375293 pod_ready.go:92] pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.875711  375293 pod_ready.go:81] duration metric: took 7.899159ms waiting for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.875730  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.885348  375293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.885395  375293 pod_ready.go:81] duration metric: took 9.655438ms waiting for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.885410  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gjlx8" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.943389  375293 pod_ready.go:92] pod "kube-proxy-gjlx8" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:31.943424  375293 pod_ready.go:81] duration metric: took 58.006295ms waiting for pod "kube-proxy-gjlx8" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:31.943435  375293 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.337716  375293 pod_ready.go:92] pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:16:32.337752  375293 pod_ready.go:81] duration metric: took 394.305103ms waiting for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.337763  375293 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:32.901098  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:32.901564  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | unable to find current IP address of domain old-k8s-version-079759 in network mk-old-k8s-version-079759
	I0108 22:16:32.901601  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | I0108 22:16:32.901488  376492 retry.go:31] will retry after 3.655042594s: waiting for machine to come up
	I0108 22:16:32.182102  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:34.685634  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:31.142410  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:31.142532  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:31.156191  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:31.642990  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:31.643137  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:31.656623  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:32.142116  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:32.142225  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:32.155597  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:32.642804  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:32.642897  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:32.656038  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:33.142630  375556 api_server.go:166] Checking apiserver status ...
	I0108 22:16:33.142742  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:33.155977  375556 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:33.156022  375556 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:33.156049  375556 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:33.156064  375556 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:33.156127  375556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:33.205442  375556 cri.go:89] found id: ""
	I0108 22:16:33.205556  375556 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:33.225775  375556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:33.236014  375556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:33.236122  375556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:33.246331  375556 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:33.246385  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:33.389338  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.044093  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.279910  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.436859  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:34.536169  375556 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:34.536274  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:35.036740  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:35.536732  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:36.036604  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:34.346227  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.347971  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.558150  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.558817  374880 main.go:141] libmachine: (old-k8s-version-079759) Found IP for machine: 192.168.39.183
	I0108 22:16:36.558839  374880 main.go:141] libmachine: (old-k8s-version-079759) Reserving static IP address...
	I0108 22:16:36.558855  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has current primary IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.559397  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "old-k8s-version-079759", mac: "52:54:00:79:02:7b", ip: "192.168.39.183"} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.559451  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | skip adding static IP to network mk-old-k8s-version-079759 - found existing host DHCP lease matching {name: "old-k8s-version-079759", mac: "52:54:00:79:02:7b", ip: "192.168.39.183"}
	I0108 22:16:36.559471  374880 main.go:141] libmachine: (old-k8s-version-079759) Reserved static IP address: 192.168.39.183
	I0108 22:16:36.559495  374880 main.go:141] libmachine: (old-k8s-version-079759) Waiting for SSH to be available...
	I0108 22:16:36.559511  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Getting to WaitForSSH function...
	I0108 22:16:36.562077  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.562439  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.562496  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.562806  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Using SSH client type: external
	I0108 22:16:36.562846  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa (-rw-------)
	I0108 22:16:36.562938  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:16:36.562985  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | About to run SSH command:
	I0108 22:16:36.563005  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | exit 0
	I0108 22:16:36.655957  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | SSH cmd err, output: <nil>: 
	I0108 22:16:36.656393  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetConfigRaw
	I0108 22:16:36.657349  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:36.660624  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.661056  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.661097  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.661415  374880 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/config.json ...
	I0108 22:16:36.661673  374880 machine.go:88] provisioning docker machine ...
	I0108 22:16:36.661699  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:36.662007  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.662224  374880 buildroot.go:166] provisioning hostname "old-k8s-version-079759"
	I0108 22:16:36.662249  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.662416  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.665572  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.666013  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.666056  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.666311  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:36.666582  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.666770  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.666945  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:36.667141  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:36.667677  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:36.667700  374880 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-079759 && echo "old-k8s-version-079759" | sudo tee /etc/hostname
	I0108 22:16:36.813113  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-079759
	
	I0108 22:16:36.813174  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.816444  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.816774  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.816814  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.816995  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:36.817323  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.817559  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:36.817739  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:36.817969  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:36.818431  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:36.818461  374880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-079759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-079759/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-079759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:16:36.952252  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
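
The SSH script above is the usual idempotent pattern for mapping 127.0.1.1 to the machine's hostname: replace an existing 127.0.1.1 line if there is one, otherwise append. A minimal standalone Go sketch of the same update (hypothetical helper, operating on a local hosts-style file rather than over SSH like minikube's ssh_runner does):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry makes sure an /etc/hosts-style file maps 127.0.1.1 to hostname,
// rewriting an existing 127.0.1.1 line or appending one if none exists.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), hostname) {
		return nil // hostname already present, nothing to do
	}
	want := "127.0.1.1 " + hostname
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if re.Match(data) {
		out = re.ReplaceAllString(string(data), want)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + want + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "old-k8s-version-079759"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
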
	I0108 22:16:36.952306  374880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:16:36.952343  374880 buildroot.go:174] setting up certificates
	I0108 22:16:36.952359  374880 provision.go:83] configureAuth start
	I0108 22:16:36.952372  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetMachineName
	I0108 22:16:36.952803  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:36.955895  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.956276  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.956310  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.956579  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:36.959251  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.959667  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:36.959723  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:36.959825  374880 provision.go:138] copyHostCerts
	I0108 22:16:36.959896  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:16:36.959909  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:16:36.959987  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:16:36.960106  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:16:36.960122  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:16:36.960152  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:16:36.960240  374880 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:16:36.960251  374880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:16:36.960286  374880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:16:36.960370  374880 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-079759 san=[192.168.39.183 192.168.39.183 localhost 127.0.0.1 minikube old-k8s-version-079759]
	I0108 22:16:37.054312  374880 provision.go:172] copyRemoteCerts
	I0108 22:16:37.054396  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:16:37.054428  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.058048  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.058545  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.058580  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.058823  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.059165  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.059439  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.059614  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.158033  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:16:37.190220  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:16:37.219035  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 22:16:37.246894  374880 provision.go:86] duration metric: configureAuth took 294.516334ms
	I0108 22:16:37.246938  374880 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:16:37.247165  374880 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:16:37.247269  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.250766  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.251305  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.251344  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.251654  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.251992  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.252253  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.252456  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.252701  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:37.253066  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:37.253091  374880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:16:37.626837  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:16:37.626886  374880 machine.go:91] provisioned docker machine in 965.198968ms
	I0108 22:16:37.626899  374880 start.go:300] post-start starting for "old-k8s-version-079759" (driver="kvm2")
	I0108 22:16:37.626924  374880 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:16:37.626991  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.627562  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:16:37.627626  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.631567  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.631840  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.631876  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.632070  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.632322  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.632578  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.632749  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.732984  374880 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:16:37.740111  374880 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:16:37.740158  374880 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:16:37.740268  374880 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:16:37.740384  374880 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:16:37.740527  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:16:37.751840  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:37.780796  374880 start.go:303] post-start completed in 153.87709ms
	I0108 22:16:37.780833  374880 fix.go:56] fixHost completed within 23.917911044s
	I0108 22:16:37.780861  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.784200  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.784663  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.784698  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.784916  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.785192  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.785482  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.785652  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.785819  374880 main.go:141] libmachine: Using SSH client type: native
	I0108 22:16:37.786310  374880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0108 22:16:37.786334  374880 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:16:37.908632  374880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704752197.846451761
	
	I0108 22:16:37.908664  374880 fix.go:206] guest clock: 1704752197.846451761
	I0108 22:16:37.908677  374880 fix.go:219] Guest: 2024-01-08 22:16:37.846451761 +0000 UTC Remote: 2024-01-08 22:16:37.780837729 +0000 UTC m=+368.040141999 (delta=65.614032ms)
	I0108 22:16:37.908740  374880 fix.go:190] guest clock delta is within tolerance: 65.614032ms
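
The fix.go lines above run `date` on the guest and compare it against the host clock, accepting the machine only if the delta stays inside a tolerance. A rough sketch of that comparison (the 2-second tolerance is an assumed value for illustration; only the ~65ms delta comes from the log):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute guest/host clock delta and whether it is within tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(65 * time.Millisecond)               // roughly the delta seen in this run
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)  // assumed tolerance
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
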
	I0108 22:16:37.908756  374880 start.go:83] releasing machines lock for "old-k8s-version-079759", held for 24.045885784s
	I0108 22:16:37.908801  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.909113  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:37.912363  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.912708  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.912745  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.913058  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913581  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913769  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:16:37.913860  374880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:16:37.913906  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.914052  374880 ssh_runner.go:195] Run: cat /version.json
	I0108 22:16:37.914081  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:16:37.916674  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917009  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917330  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.917371  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917433  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.917523  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:37.917545  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:37.917622  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.917791  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.917862  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:16:37.917973  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:37.918026  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:16:37.918185  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:16:37.918303  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:16:38.009398  374880 ssh_runner.go:195] Run: systemctl --version
	I0108 22:16:38.040945  374880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:16:38.191198  374880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:16:38.198405  374880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:16:38.198504  374880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:16:38.218602  374880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:16:38.218641  374880 start.go:475] detecting cgroup driver to use...
	I0108 22:16:38.218722  374880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:16:38.234161  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:16:38.250033  374880 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:16:38.250107  374880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:16:38.266262  374880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:16:38.281553  374880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:16:38.402503  374880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:16:38.558016  374880 docker.go:219] disabling docker service ...
	I0108 22:16:38.558124  374880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:16:38.573689  374880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:16:38.589002  374880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:16:38.718943  374880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:16:38.853252  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:16:38.869464  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:16:38.890384  374880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 22:16:38.890538  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.904645  374880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:16:38.904745  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.916308  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.927747  374880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:16:38.938877  374880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:16:38.951536  374880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:16:38.961810  374880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:16:38.961889  374880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:16:38.976131  374880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:16:38.990253  374880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:16:39.129313  374880 ssh_runner.go:195] Run: sudo systemctl restart crio
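
The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup) and then restarts the service. A condensed equivalent of those edits, wrapped in Go's os/exec purely for illustration (paths, sed expressions, and values are taken from the log; the wrapper itself is hypothetical, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and surfaces its combined output on failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		// point CRI-O at the pause image this Kubernetes version expects
		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|`, conf},
		// switch to the cgroupfs cgroup manager and run conmon in the pod cgroup
		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"sudo", "sed", "-i", `/conmon_cgroup = .*/d`, conf},
		{"sudo", "sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
		// pick up the new configuration
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println(err)
			return
		}
	}
}
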
	I0108 22:16:39.322691  374880 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:16:39.322796  374880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:16:39.329204  374880 start.go:543] Will wait 60s for crictl version
	I0108 22:16:39.329317  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:39.333991  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:16:39.381363  374880 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:16:39.381484  374880 ssh_runner.go:195] Run: crio --version
	I0108 22:16:39.435964  374880 ssh_runner.go:195] Run: crio --version
	I0108 22:16:39.499543  374880 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0108 22:16:39.501084  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetIP
	I0108 22:16:39.504205  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:39.504541  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:16:39.504579  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:16:39.504935  374880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 22:16:39.510323  374880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:39.526998  374880 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:16:39.527057  374880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:39.577709  374880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0108 22:16:39.577793  374880 ssh_runner.go:195] Run: which lz4
	I0108 22:16:39.582925  374880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:16:39.589373  374880 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:16:39.589421  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0108 22:16:37.184707  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:39.683810  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:36.537007  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:37.037157  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:37.061202  375556 api_server.go:72] duration metric: took 2.525037167s to wait for apiserver process to appear ...
	I0108 22:16:37.061229  375556 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:16:37.061250  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:37.061790  375556 api_server.go:269] stopped: https://192.168.50.18:8444/healthz: Get "https://192.168.50.18:8444/healthz": dial tcp 192.168.50.18:8444: connect: connection refused
	I0108 22:16:37.561995  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:38.852752  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:41.361118  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:42.562614  375556 api_server.go:269] stopped: https://192.168.50.18:8444/healthz: Get "https://192.168.50.18:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 22:16:42.562680  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:42.626918  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:16:42.626956  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:16:43.061435  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:43.078776  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:43.078841  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:43.561364  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:43.575304  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:43.575397  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:44.061694  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:44.072328  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 22:16:44.072394  375556 api_server.go:103] status: https://192.168.50.18:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 22:16:44.561536  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:16:44.572055  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 200:
	ok
	I0108 22:16:44.586947  375556 api_server.go:141] control plane version: v1.28.4
	I0108 22:16:44.587011  375556 api_server.go:131] duration metric: took 7.52577273s to wait for apiserver health ...
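
The retries above show the normal progression of an apiserver coming up: connection refused, then 403 for the anonymous probe, then 500 while post-start hooks (rbac/bootstrap-roles and friends) finish, then 200. A minimal poller for such an endpoint (the URL is the one from the log; skipping TLS verification is an assumption made only so the self-signed probe works in a sketch):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.18:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
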
	I0108 22:16:44.587029  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:16:44.587040  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:44.765569  375556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:16:41.520470  374880 crio.go:444] Took 1.937584 seconds to copy over tarball
	I0108 22:16:41.520541  374880 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:16:41.683864  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:44.183495  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:44.867194  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:16:44.881203  375556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
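
The bridge CNI recommended here for the kvm2 driver with the crio runtime is a standard bridge-plus-portmap chain written to /etc/cni/net.d/1-k8s.conflist. An illustrative file of that shape (field values are typical defaults and the pod CIDR echoes the 10.244.0.0/16 seen later in the log; this is not necessarily the exact 457-byte file from this run):

package main

import "os"

// bridgeConflist is an example bridge+portmap CNI chain of the kind minikube
// installs for the "bridge" CNI; values are illustrative defaults.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// In the log this content lands in /etc/cni/net.d/1-k8s.conflist on the guest via scp.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
		panic(err)
	}
}
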
	I0108 22:16:44.906051  375556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:16:44.958770  375556 system_pods.go:59] 8 kube-system pods found
	I0108 22:16:44.958813  375556 system_pods.go:61] "coredns-5dd5756b68-vcmh6" [4d87af85-075d-427c-b4ca-ba57421fc8de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:16:44.958823  375556 system_pods.go:61] "etcd-default-k8s-diff-port-292054" [5353bc6f-061b-414b-823b-fa224887733c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 22:16:44.958831  375556 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-292054" [aa609bfc-ba8f-4d82-bdcd-2f17e0b1b2a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 22:16:44.958838  375556 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-292054" [2500070d-a348-47a9-a1d6-525eb3ee12d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 22:16:44.958847  375556 system_pods.go:61] "kube-proxy-f4xsp" [d0987c89-c598-4ae9-a60a-bad8df066d0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:16:44.958867  375556 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-292054" [9b4e73b7-a4ff-469f-b03e-1170d068af2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 22:16:44.958883  375556 system_pods.go:61] "metrics-server-57f55c9bc5-6w57p" [7a85be99-ad7e-4866-a8d8-0972435dfd02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:16:44.958899  375556 system_pods.go:61] "storage-provisioner" [4be6edbe-cb8e-4598-9d23-1cefc0afc184] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:16:44.958908  375556 system_pods.go:74] duration metric: took 52.82566ms to wait for pod list to return data ...
	I0108 22:16:44.958923  375556 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:16:44.965171  375556 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:16:44.965220  375556 node_conditions.go:123] node cpu capacity is 2
	I0108 22:16:44.965235  375556 node_conditions.go:105] duration metric: took 6.306299ms to run NodePressure ...
	I0108 22:16:44.965271  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:43.845812  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:45.851004  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:45.115268  374880 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.594690355s)
	I0108 22:16:45.115304  374880 crio.go:451] Took 3.594805 seconds to extract the tarball
	I0108 22:16:45.115316  374880 ssh_runner.go:146] rm: /preloaded.tar.lz4
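
The preload flow above checks whether /preloaded.tar.lz4 already exists on the guest, transfers the ~441 MB preload tarball if not, extracts it into /var with lz4, and deletes the tarball afterwards. A compact sketch of the check-extract-cleanup steps (local shell-outs for illustration only; minikube performs the copy over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4"

	// Only transfer the preload if it is not already on the machine.
	if _, err := os.Stat(tarball); err != nil {
		// In the real flow minikube now scp's the preload tarball to this path.
		fmt.Println("preload tarball not on the machine yet:", err)
		return
	}

	// Extract the preloaded images and binaries into /var, then remove the tarball.
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	_ = os.Remove(tarball)
}
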
	I0108 22:16:45.165012  374880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:16:45.542219  374880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0108 22:16:45.542266  374880 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:16:45.542362  374880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:45.542384  374880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.542409  374880 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 22:16:45.542451  374880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.542489  374880 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.542392  374880 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.542666  374880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.542661  374880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.543883  374880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.543921  374880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.543888  374880 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 22:16:45.543944  374880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.543888  374880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:45.543970  374880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.543895  374880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.544327  374880 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.737830  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.747956  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0108 22:16:45.780688  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.799788  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.811226  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:45.819948  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:45.857132  374880 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0108 22:16:45.857195  374880 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0108 22:16:45.857257  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.867494  374880 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0108 22:16:45.867547  374880 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0108 22:16:45.867622  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.871438  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:45.900657  374880 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0108 22:16:45.900706  374880 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:45.900755  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:45.986789  374880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0108 22:16:45.986850  374880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:45.986909  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.001283  374880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0108 22:16:46.001335  374880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:46.001389  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.009750  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0108 22:16:46.009783  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0108 22:16:46.009830  374880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0108 22:16:46.009848  374880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0108 22:16:46.009879  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0108 22:16:46.009887  374880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:46.009887  374880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:46.009904  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0108 22:16:46.009929  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.009967  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0108 22:16:46.009933  374880 ssh_runner.go:195] Run: which crictl
	I0108 22:16:46.173258  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0108 22:16:46.173293  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 22:16:46.173387  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0108 22:16:46.173402  374880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.173451  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0108 22:16:46.173458  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0108 22:16:46.173539  374880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0108 22:16:46.173588  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0108 22:16:46.238533  374880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0108 22:16:46.238562  374880 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.238589  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0108 22:16:46.238619  374880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0108 22:16:46.238692  374880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0108 22:16:46.499734  374880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:16:47.197262  374880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0108 22:16:47.197344  374880 cache_images.go:92] LoadImages completed in 1.65506117s
	W0108 22:16:47.197431  374880 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17866-334768/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
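The warning above is raised when the host-side cache tarball for an image (here registry.k8s.io/coredns:1.6.2) is missing, so its load is skipped while the tarballs that do exist (pause_3.1) are transferred and loaded with `podman load -i`. A minimal Go sketch of that existence check; cachedImagePath is a hypothetical helper that only mirrors the directory layout visible in the log, not minikube's real path logic:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cachedImagePath mimics the layout seen in the log:
// <minikube home>/cache/images/<arch>/<registry>/<name>_<tag>
func cachedImagePath(home, arch, image string) string {
	return filepath.Join(home, "cache", "images", arch, image)
}

func main() {
	p := cachedImagePath("/home/jenkins/.minikube", "amd64", "registry.k8s.io/coredns_1.6.2")
	if _, err := os.Stat(p); os.IsNotExist(err) {
		// Same situation as the warning in the log: the cached tarball is
		// absent on the host, so the load step is skipped.
		fmt.Printf("unable to load cached image: stat %s: no such file or directory\n", p)
		return
	}
	fmt.Println("cached tarball present; would transfer and `podman load -i` it")
}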
	I0108 22:16:47.197628  374880 ssh_runner.go:195] Run: crio config
	I0108 22:16:47.273121  374880 cni.go:84] Creating CNI manager for ""
	I0108 22:16:47.273164  374880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:16:47.273206  374880 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:16:47.273242  374880 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-079759 NodeName:old-k8s-version-079759 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 22:16:47.273439  374880 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-079759"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-079759
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.183:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:16:47.273557  374880 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-079759 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079759 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:16:47.273641  374880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 22:16:47.284374  374880 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:16:47.284528  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:16:47.295740  374880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0108 22:16:47.317874  374880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:16:47.339820  374880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0108 22:16:47.365063  374880 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0108 22:16:47.369942  374880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:16:47.387586  374880 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759 for IP: 192.168.39.183
	I0108 22:16:47.387637  374880 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:16:47.387862  374880 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:16:47.387929  374880 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:16:47.388036  374880 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.key
	I0108 22:16:47.388144  374880 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.key.a2b84326
	I0108 22:16:47.388185  374880 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.key
	I0108 22:16:47.388370  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:16:47.388426  374880 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:16:47.388449  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:16:47.388490  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:16:47.388524  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:16:47.388562  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:16:47.388629  374880 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:16:47.389626  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:16:47.424129  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:16:47.455835  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:16:47.489732  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:16:47.523253  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:16:47.555019  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:16:47.587218  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:16:47.620629  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:16:47.654460  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:16:47.688945  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:16:47.722824  374880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:16:47.754016  374880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:16:47.773665  374880 ssh_runner.go:195] Run: openssl version
	I0108 22:16:47.779972  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:16:47.794327  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.801998  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.802101  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:16:47.808765  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:16:47.822088  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:16:47.836322  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.843412  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.843508  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:16:47.852467  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:16:47.871573  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:16:47.886132  374880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.892165  374880 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.892250  374880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:16:47.898728  374880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:16:47.911118  374880 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:16:47.918486  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 22:16:47.928188  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 22:16:47.936324  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 22:16:47.942939  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 22:16:47.952136  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 22:16:47.962062  374880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
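Each `openssl x509 -noout -in <cert> -checkend 86400` run above exits non-zero if the certificate expires within 86400 seconds (24 hours), which would trigger regeneration before the cluster restart. A hedged Go equivalent of one such check using crypto/x509; the file path is simply the one from the log, and error handling is kept minimal for the sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log; on the node the certs live under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h; would regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}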
	I0108 22:16:47.969861  374880 kubeadm.go:404] StartCluster: {Name:old-k8s-version-079759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-079759 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:16:47.969986  374880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:16:47.970065  374880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:48.023933  374880 cri.go:89] found id: ""
	I0108 22:16:48.024025  374880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:16:48.040341  374880 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 22:16:48.040377  374880 kubeadm.go:636] restartCluster start
	I0108 22:16:48.040461  374880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 22:16:48.051709  374880 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:48.053467  374880 kubeconfig.go:92] found "old-k8s-version-079759" server: "https://192.168.39.183:8443"
	I0108 22:16:48.057824  374880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 22:16:48.071248  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:48.071367  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:48.086864  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:48.572297  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:48.572426  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:48.590996  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:49.072205  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:49.072316  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:49.085908  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:49.571496  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:49.571641  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:49.587609  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:46.683555  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:48.683848  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:47.463595  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.498282893s)
	I0108 22:16:47.463651  375556 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:16:47.494376  375556 kubeadm.go:787] kubelet initialised
	I0108 22:16:47.494409  375556 kubeadm.go:788] duration metric: took 30.746268ms waiting for restarted kubelet to initialise ...
	I0108 22:16:47.494419  375556 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:16:47.518711  375556 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace to be "Ready" ...
	I0108 22:16:49.532387  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:47.854322  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:50.347325  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:52.349479  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:50.071318  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:50.071492  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:50.087514  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:50.572137  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:50.572248  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:50.586581  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.072060  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:51.072182  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:51.087008  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.571464  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:51.571586  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:51.585684  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:52.072246  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:52.072323  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:52.087689  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:52.572243  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:52.572347  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:52.587037  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:53.071470  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:53.071589  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:53.086911  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:53.571460  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:53.571553  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:53.586045  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:54.072236  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:54.072358  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:54.087701  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:54.572312  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:54.572446  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:54.587922  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:51.181229  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:53.182527  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:52.026615  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:54.027979  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:54.849162  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:57.346988  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:55.071292  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:55.071441  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:55.090623  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:55.572144  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:55.572231  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:55.587405  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:56.071926  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:56.072056  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:56.086264  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:56.571790  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:56.571930  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:56.586088  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:57.071438  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:57.071546  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:57.087310  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:57.571491  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:57.571640  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:57.585754  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:58.071604  374880 api_server.go:166] Checking apiserver status ...
	I0108 22:16:58.071723  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 22:16:58.087027  374880 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 22:16:58.087070  374880 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 22:16:58.087086  374880 kubeadm.go:1135] stopping kube-system containers ...
	I0108 22:16:58.087128  374880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 22:16:58.087206  374880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:16:58.137792  374880 cri.go:89] found id: ""
	I0108 22:16:58.137875  374880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 22:16:58.157140  374880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:16:58.171953  374880 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:16:58.172029  374880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:58.186287  374880 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 22:16:58.186325  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:58.316514  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.124691  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.386136  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.490503  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:16:59.609542  374880 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:16:59.609648  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:16:55.684783  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:58.189882  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:56.527144  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:58.529935  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:01.030202  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:16:59.350073  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:01.845861  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:00.109804  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:00.610728  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.110191  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.609754  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:01.638919  374880 api_server.go:72] duration metric: took 2.029378055s to wait for apiserver process to appear ...
	I0108 22:17:01.638952  374880 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:17:01.638975  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:00.681951  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:02.683028  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:04.685040  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:03.527242  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:05.527888  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:03.850211  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:06.350594  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:06.639278  374880 api_server.go:269] stopped: https://192.168.39.183:8443/healthz: Get "https://192.168.39.183:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 22:17:06.639347  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.110234  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 22:17:08.110269  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 22:17:08.110287  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.268403  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.268437  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:08.268451  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.300726  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.300787  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:08.639135  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:08.676558  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:08.676598  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:09.139592  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:09.151081  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0108 22:17:09.151120  374880 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0108 22:17:09.639741  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:09.646812  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0108 22:17:09.656279  374880 api_server.go:141] control plane version: v1.16.0
	I0108 22:17:09.656318  374880 api_server.go:131] duration metric: took 8.017357804s to wait for apiserver health ...
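The healthz wait above polls GET /healthz roughly every 500ms, treating the 403 (anonymous access before RBAC bootstrap completes) and 500 (post-start hooks still failing) responses as retryable until the endpoint returns 200/ok. A minimal sketch of such a poll loop; certificate verification is skipped here only because this ad-hoc client does not trust the cluster CA, which is an assumption of the sketch rather than what the harness does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.183:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			// 403 before RBAC bootstrap and 500 while post-start hooks run
			// are both retryable, as seen in the log above.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}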
	I0108 22:17:09.656333  374880 cni.go:84] Creating CNI manager for ""
	I0108 22:17:09.656342  374880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:17:09.658633  374880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:17:09.660081  374880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:17:09.670922  374880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:17:09.697148  374880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:17:09.710916  374880 system_pods.go:59] 7 kube-system pods found
	I0108 22:17:09.710958  374880 system_pods.go:61] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:09.710966  374880 system_pods.go:61] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:09.710974  374880 system_pods.go:61] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:09.710982  374880 system_pods.go:61] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Pending
	I0108 22:17:09.710988  374880 system_pods.go:61] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:09.710994  374880 system_pods.go:61] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:09.710999  374880 system_pods.go:61] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:09.711007  374880 system_pods.go:74] duration metric: took 13.819282ms to wait for pod list to return data ...
	I0108 22:17:09.711017  374880 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:17:09.717809  374880 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:17:09.717862  374880 node_conditions.go:123] node cpu capacity is 2
	I0108 22:17:09.717882  374880 node_conditions.go:105] duration metric: took 6.857808ms to run NodePressure ...
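The kube-system pod wait and NodePressure check above simply list pods and node conditions through the freshly restarted apiserver. A rough client-go sketch of the pod-listing half; the kubeconfig path here is an assumption for illustration, not the profile kubeconfig the harness actually uses:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Prints each kube-system pod and its phase, the same data the wait loop inspects.
	for _, p := range pods.Items {
		fmt.Printf("%s %s\n", p.Name, p.Status.Phase)
	}
}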
	I0108 22:17:09.717921  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 22:17:07.181980  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:09.182492  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:10.147851  374880 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 22:17:10.155593  374880 kubeadm.go:787] kubelet initialised
	I0108 22:17:10.155627  374880 kubeadm.go:788] duration metric: took 7.730921ms waiting for restarted kubelet to initialise ...
	I0108 22:17:10.155636  374880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:10.162330  374880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.173343  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.173384  374880 pod_ready.go:81] duration metric: took 11.015314ms waiting for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.173398  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.173408  374880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.181308  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "etcd-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.181354  374880 pod_ready.go:81] duration metric: took 7.925248ms waiting for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.181370  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "etcd-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.181382  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.201297  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.201340  374880 pod_ready.go:81] duration metric: took 19.943972ms waiting for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.201355  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.201364  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.212246  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.212303  374880 pod_ready.go:81] duration metric: took 10.921798ms waiting for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.212326  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.212337  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.554958  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-proxy-mfs65" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.554990  374880 pod_ready.go:81] duration metric: took 342.644311ms waiting for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.555000  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-proxy-mfs65" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.555014  374880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:10.952644  374880 pod_ready.go:97] node "old-k8s-version-079759" hosting pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.952690  374880 pod_ready.go:81] duration metric: took 397.663927ms waiting for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	E0108 22:17:10.952705  374880 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-079759" hosting pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:10.952721  374880 pod_ready.go:38] duration metric: took 797.073923ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:10.952756  374880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:17:10.966105  374880 ops.go:34] apiserver oom_adj: -16
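Reading /proc/$(pgrep kube-apiserver)/oom_adj confirms the apiserver process is shielded from the OOM killer (-16 here). A small Go sketch doing the same lookup; using `pgrep -n` (newest matching process) is an assumption standing in for the shell substitution used in the logged command:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver PID, analogous to the logged pgrep call.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}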
	I0108 22:17:10.966142  374880 kubeadm.go:640] restartCluster took 22.925755113s
	I0108 22:17:10.966160  374880 kubeadm.go:406] StartCluster complete in 22.996305207s
	I0108 22:17:10.966183  374880 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:17:10.966269  374880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:17:10.968639  374880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:17:10.968991  374880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:17:10.969141  374880 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:17:10.969252  374880 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969268  374880 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969273  374880 config.go:182] Loaded profile config "old-k8s-version-079759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 22:17:10.969292  374880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-079759"
	I0108 22:17:10.969296  374880 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-079759"
	W0108 22:17:10.969314  374880 addons.go:246] addon metrics-server should already be in state true
	I0108 22:17:10.969351  374880 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-079759"
	I0108 22:17:10.969368  374880 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-079759"
	W0108 22:17:10.969375  374880 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:17:10.969393  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.969409  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.969785  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969823  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969832  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.969824  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.969916  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.969926  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.990948  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0108 22:17:10.991126  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0108 22:17:10.991782  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:10.991979  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:10.992429  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:10.992473  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:10.992593  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:10.992618  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:10.992993  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:10.993076  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:10.993348  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:10.993741  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.993822  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:10.997882  374880 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-079759"
	W0108 22:17:10.997908  374880 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:17:10.997937  374880 host.go:66] Checking if "old-k8s-version-079759" exists ...
	I0108 22:17:10.998375  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:10.998422  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.014704  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0108 22:17:11.015259  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.015412  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0108 22:17:11.016128  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.016160  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.016532  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.017165  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:11.017214  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.017521  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.018124  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.018140  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.018560  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.018854  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.018926  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0108 22:17:11.019671  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.020333  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.020353  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.020686  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.021353  374880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:17:11.021406  374880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:17:11.021696  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.024514  374880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:17:11.026172  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:17:11.026202  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:17:11.026238  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.031029  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.031951  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.031979  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.032327  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.032560  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.032709  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.032862  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.039130  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0108 22:17:11.039792  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.040408  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.040426  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.040821  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.041071  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.041764  374880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45497
	I0108 22:17:11.042444  374880 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:17:11.042927  374880 main.go:141] libmachine: Using API Version  1
	I0108 22:17:11.042952  374880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:17:11.043292  374880 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:17:11.043498  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetState
	I0108 22:17:11.043832  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.046099  374880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:17:07.529123  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:09.529950  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:11.048145  374880 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:17:11.048189  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:17:11.048231  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.045325  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .DriverName
	I0108 22:17:11.048952  374880 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:17:11.048976  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:17:11.049021  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHHostname
	I0108 22:17:11.052466  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.052852  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.052891  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.053248  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.053542  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.053781  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.053964  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.062218  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHPort
	I0108 22:17:11.062324  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.062338  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:02:7b", ip: ""} in network mk-old-k8s-version-079759: {Iface:virbr2 ExpiryTime:2024-01-08 23:16:28 +0000 UTC Type:0 Mac:52:54:00:79:02:7b Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:old-k8s-version-079759 Clientid:01:52:54:00:79:02:7b}
	I0108 22:17:11.062363  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | domain old-k8s-version-079759 has defined IP address 192.168.39.183 and MAC address 52:54:00:79:02:7b in network mk-old-k8s-version-079759
	I0108 22:17:11.063474  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHKeyPath
	I0108 22:17:11.063729  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .GetSSHUsername
	I0108 22:17:11.063926  374880 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/old-k8s-version-079759/id_rsa Username:docker}
	I0108 22:17:11.190657  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:17:11.190690  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:17:11.221757  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:17:11.254133  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:17:11.285976  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:17:11.286005  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:17:11.365594  374880 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:17:11.365632  374880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:17:11.406494  374880 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 22:17:11.459160  374880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:17:11.475488  374880 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-079759" context rescaled to 1 replicas
	I0108 22:17:11.475557  374880 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:17:11.478952  374880 out.go:177] * Verifying Kubernetes components...
	I0108 22:17:11.480674  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:17:12.238037  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.016231756s)
	I0108 22:17:12.238158  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.238178  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.238585  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.238616  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.238630  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.238640  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.238649  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.238928  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.238953  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.292897  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.292926  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.293228  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.293249  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.297621  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.043443256s)
	I0108 22:17:12.297697  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.297717  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.298050  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.298107  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.298121  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.298136  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.298151  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.298377  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.298434  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.298449  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.460391  374880 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-079759" to be "Ready" ...
	I0108 22:17:12.460519  374880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.001301389s)
	I0108 22:17:12.460578  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.460600  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.460930  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.460950  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.460970  374880 main.go:141] libmachine: Making call to close driver server
	I0108 22:17:12.460980  374880 main.go:141] libmachine: (old-k8s-version-079759) Calling .Close
	I0108 22:17:12.461238  374880 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:17:12.461262  374880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:17:12.461278  374880 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-079759"
	I0108 22:17:12.461289  374880 main.go:141] libmachine: (old-k8s-version-079759) DBG | Closing plugin on server side
	I0108 22:17:12.464523  374880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0108 22:17:08.848369  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:11.349358  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:12.466030  374880 addons.go:508] enable addons completed in 1.496887794s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0108 22:17:14.465035  374880 node_ready.go:58] node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:11.186335  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:13.680427  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:12.029896  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:14.527011  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:13.847034  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:16.348875  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:16.465852  374880 node_ready.go:58] node "old-k8s-version-079759" has status "Ready":"False"
	I0108 22:17:18.965439  374880 node_ready.go:49] node "old-k8s-version-079759" has status "Ready":"True"
	I0108 22:17:18.965487  374880 node_ready.go:38] duration metric: took 6.505055778s waiting for node "old-k8s-version-079759" to be "Ready" ...
	I0108 22:17:18.965512  374880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:18.972414  374880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.981201  374880 pod_ready.go:92] pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.981242  374880 pod_ready.go:81] duration metric: took 8.788084ms waiting for pod "coredns-5644d7b6d9-fzlzx" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.981258  374880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.987118  374880 pod_ready.go:92] pod "etcd-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.987147  374880 pod_ready.go:81] duration metric: took 5.880499ms waiting for pod "etcd-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.987165  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.995928  374880 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:18.995972  374880 pod_ready.go:81] duration metric: took 8.795387ms waiting for pod "kube-apiserver-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:18.995990  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.006241  374880 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.006273  374880 pod_ready.go:81] duration metric: took 10.274527ms waiting for pod "kube-controller-manager-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.006288  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.366551  374880 pod_ready.go:92] pod "kube-proxy-mfs65" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.366588  374880 pod_ready.go:81] duration metric: took 360.29132ms waiting for pod "kube-proxy-mfs65" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.366607  374880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.766225  374880 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:19.766266  374880 pod_ready.go:81] duration metric: took 399.648483ms waiting for pod "kube-scheduler-old-k8s-version-079759" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:19.766287  374880 pod_ready.go:38] duration metric: took 800.758248ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:17:19.766317  374880 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:17:19.766407  374880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:17:19.790384  374880 api_server.go:72] duration metric: took 8.314784167s to wait for apiserver process to appear ...
	I0108 22:17:19.790417  374880 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:17:19.790442  374880 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0108 22:17:15.682742  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:18.181808  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:19.813424  374880 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0108 22:17:19.814615  374880 api_server.go:141] control plane version: v1.16.0
	I0108 22:17:19.814638  374880 api_server.go:131] duration metric: took 24.214441ms to wait for apiserver health ...
	I0108 22:17:19.814647  374880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:17:19.967792  374880 system_pods.go:59] 7 kube-system pods found
	I0108 22:17:19.967850  374880 system_pods.go:61] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:19.967858  374880 system_pods.go:61] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:19.967865  374880 system_pods.go:61] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:19.967871  374880 system_pods.go:61] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Running
	I0108 22:17:19.967875  374880 system_pods.go:61] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:19.967882  374880 system_pods.go:61] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:19.967896  374880 system_pods.go:61] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:19.967908  374880 system_pods.go:74] duration metric: took 153.252828ms to wait for pod list to return data ...
	I0108 22:17:19.967925  374880 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:17:20.166954  374880 default_sa.go:45] found service account: "default"
	I0108 22:17:20.166999  374880 default_sa.go:55] duration metric: took 199.059234ms for default service account to be created ...
	I0108 22:17:20.167013  374880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:17:20.367805  374880 system_pods.go:86] 7 kube-system pods found
	I0108 22:17:20.367843  374880 system_pods.go:89] "coredns-5644d7b6d9-fzlzx" [f48e2d6f-a573-463f-b96e-9f96b3161d66] Running
	I0108 22:17:20.367851  374880 system_pods.go:89] "etcd-old-k8s-version-079759" [702e5800-0aab-420a-b2e0-4224661f671e] Running
	I0108 22:17:20.367878  374880 system_pods.go:89] "kube-apiserver-old-k8s-version-079759" [b059b547-5d50-4d04-95c7-641f5f1dc4bc] Running
	I0108 22:17:20.367889  374880 system_pods.go:89] "kube-controller-manager-old-k8s-version-079759" [4707c04c-8879-407b-95df-21f989b7c02b] Running
	I0108 22:17:20.367895  374880 system_pods.go:89] "kube-proxy-mfs65" [73f37e50-5c82-4288-8cf8-cb1c576c7472] Running
	I0108 22:17:20.367901  374880 system_pods.go:89] "kube-scheduler-old-k8s-version-079759" [092736f1-d9b1-4f65-bf52-365fc2c68565] Running
	I0108 22:17:20.367908  374880 system_pods.go:89] "storage-provisioner" [3bd9c660-a79f-43a4-942c-2fc4f3c8ff32] Running
	I0108 22:17:20.367917  374880 system_pods.go:126] duration metric: took 200.897828ms to wait for k8s-apps to be running ...
	I0108 22:17:20.367931  374880 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:17:20.368002  374880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:17:20.384736  374880 system_svc.go:56] duration metric: took 16.789711ms WaitForService to wait for kubelet.
	I0108 22:17:20.384777  374880 kubeadm.go:581] duration metric: took 8.909185454s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:17:20.384805  374880 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:17:20.566662  374880 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:17:20.566699  374880 node_conditions.go:123] node cpu capacity is 2
	I0108 22:17:20.566713  374880 node_conditions.go:105] duration metric: took 181.900804ms to run NodePressure ...
	I0108 22:17:20.566733  374880 start.go:228] waiting for startup goroutines ...
	I0108 22:17:20.566743  374880 start.go:233] waiting for cluster config update ...
	I0108 22:17:20.566758  374880 start.go:242] writing updated cluster config ...
	I0108 22:17:20.567148  374880 ssh_runner.go:195] Run: rm -f paused
	I0108 22:17:20.625096  374880 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0108 22:17:20.627497  374880 out.go:177] 
	W0108 22:17:20.629694  374880 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0108 22:17:20.631265  374880 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0108 22:17:20.632916  374880 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-079759" cluster and "default" namespace by default
	I0108 22:17:16.529078  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:19.030929  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:18.848535  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:20.848603  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:20.182275  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:22.183490  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:24.682561  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:21.528256  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:23.529114  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:26.027560  375556 pod_ready.go:102] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:23.346430  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:25.348995  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.182420  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:29.183480  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.530319  375556 pod_ready.go:92] pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.530347  375556 pod_ready.go:81] duration metric: took 40.011595743s waiting for pod "coredns-5dd5756b68-vcmh6" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.530357  375556 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.537548  375556 pod_ready.go:92] pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.537577  375556 pod_ready.go:81] duration metric: took 7.212322ms waiting for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.537588  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.549788  375556 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.549830  375556 pod_ready.go:81] duration metric: took 12.233749ms waiting for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.549845  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.558337  375556 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.558364  375556 pod_ready.go:81] duration metric: took 8.510648ms waiting for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.558375  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f4xsp" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.568980  375556 pod_ready.go:92] pod "kube-proxy-f4xsp" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.569008  375556 pod_ready.go:81] duration metric: took 10.626925ms waiting for pod "kube-proxy-f4xsp" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.569018  375556 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.924746  375556 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:17:27.924792  375556 pod_ready.go:81] duration metric: took 355.765575ms waiting for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:27.924810  375556 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" ...
	I0108 22:17:29.934031  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:27.846645  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:29.848666  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:32.347317  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:31.681795  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.183509  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:31.935866  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.434680  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:34.850409  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:37.348417  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:36.681720  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:39.187220  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:36.933398  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:38.937527  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:39.849140  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:42.348407  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:41.681963  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:44.183281  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:41.434499  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:43.438745  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:45.934532  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:44.846802  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:46.847285  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:46.683139  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:49.180610  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:47.942228  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:50.434779  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:49.346290  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:51.346592  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:51.181365  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:53.182147  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:52.435305  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:54.933017  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:53.347169  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:55.847921  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:55.680794  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:57.683942  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:59.684807  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:56.933676  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:59.433266  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:17:58.346863  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:00.351598  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:02.358340  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:02.183383  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:04.684356  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:01.438892  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:03.942882  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:04.845380  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:06.850561  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:07.182060  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:09.182524  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:06.433230  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:08.435570  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:10.933834  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:08.853139  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:11.345311  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:11.183083  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.185196  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.435974  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.934920  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:13.347243  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.350752  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:15.683154  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:18.183396  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:17.938857  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.434388  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:17.849663  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.349073  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.349854  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:20.183740  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.681755  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:22.938829  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:24.940050  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:24.845935  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:26.848602  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:25.182926  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:27.681471  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:27.433983  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:29.933179  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:29.348482  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:31.848768  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:30.182593  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:32.184633  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:34.684351  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:31.935920  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:34.432407  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:33.849853  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:36.347248  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:37.185296  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:39.683266  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:36.434742  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:38.935788  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:38.347422  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:40.847846  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:42.184271  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:44.191899  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:41.434194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:43.435816  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:45.436582  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:43.348144  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:45.850291  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:46.681976  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:48.684379  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:47.934501  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:50.432989  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:48.346408  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:50.348943  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:51.181865  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:53.182990  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:52.433070  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:54.442432  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:52.846607  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:54.850642  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:57.347230  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:55.681392  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:57.683410  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:56.932551  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:58.935585  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:18:59.348127  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:01.848981  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:00.183662  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:02.681392  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:04.683283  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:01.433125  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:03.433714  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:05.434985  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:03.849460  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:06.349541  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:07.182372  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:09.681196  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:07.935969  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:10.435837  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:08.847292  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:10.850261  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:11.681770  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:13.683390  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:12.439563  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:14.933378  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:13.347217  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:15.847524  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:16.181226  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:18.182271  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:16.936400  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:19.433956  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:18.347048  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:20.846947  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:20.182396  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:22.681453  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:24.682678  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:21.934747  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:23.935826  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:22.847819  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:24.847981  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:27.346372  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:27.181829  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:29.686277  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:26.433266  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:28.433601  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:30.435331  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:29.349171  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:31.848107  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:31.686784  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.181838  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:32.932383  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.933487  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:34.349446  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:36.845807  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:36.182711  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:38.183592  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:37.433841  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:39.440368  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:38.847000  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:40.849528  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:40.681394  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:42.681803  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:41.934279  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:44.433480  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:43.346283  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:45.849805  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:45.182604  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:47.183086  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:49.681891  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:46.934165  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:49.433592  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:48.346422  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:50.346711  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:52.347386  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:52.181241  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:54.184167  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:51.435757  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:53.932937  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:55.935076  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:54.847306  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:56.849761  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:56.681736  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:59.182156  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:58.433892  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:00.435066  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:19:59.348176  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:01.847094  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:01.682869  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.183165  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:02.934032  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.935393  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:04.347516  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:06.846388  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:06.681333  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:08.684291  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:07.436354  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:09.934776  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:08.849876  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.346794  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.184760  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.681471  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:11.935382  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.935718  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:13.347573  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:15.846434  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:15.684425  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:18.182489  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:16.435556  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:18.934238  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:17.847804  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:19.851620  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:22.347305  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:20.183538  375205 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:21.174145  375205 pod_ready.go:81] duration metric: took 4m0.001134505s waiting for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" ...
	E0108 22:20:21.174196  375205 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-pk8bm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:20:21.174225  375205 pod_ready.go:38] duration metric: took 4m11.09670924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:20:21.174739  375205 kubeadm.go:640] restartCluster took 4m32.919154523s
	W0108 22:20:21.174932  375205 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:20:21.175031  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 22:20:21.437480  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:23.437985  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:25.934631  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:24.847918  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:27.354150  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:28.434309  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:30.935564  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:29.845550  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:31.847597  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:32.338942  375293 pod_ready.go:81] duration metric: took 4m0.001163118s waiting for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" ...
	E0108 22:20:32.338972  375293 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-jswgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:20:32.338994  375293 pod_ready.go:38] duration metric: took 4m8.522193777s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:20:32.339022  375293 kubeadm.go:640] restartCluster took 4m31.730992352s
	W0108 22:20:32.339087  375293 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:20:32.339116  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 22:20:32.935958  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:35.434816  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:36.302806  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.127706719s)
	I0108 22:20:36.302938  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:20:36.321621  375205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:20:36.334281  375205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:20:36.346671  375205 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:20:36.346717  375205 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:20:36.614321  375205 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:20:37.936328  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:40.435692  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:42.933586  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:45.434194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:48.562754  375205 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0108 22:20:48.562854  375205 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:20:48.562933  375205 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:20:48.563069  375205 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:20:48.563228  375205 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:20:48.563339  375205 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:20:48.565241  375205 out.go:204]   - Generating certificates and keys ...
	I0108 22:20:48.565369  375205 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:20:48.565449  375205 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:20:48.565542  375205 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:20:48.565610  375205 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:20:48.565733  375205 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:20:48.565840  375205 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:20:48.565938  375205 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:20:48.566036  375205 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:20:48.566148  375205 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:20:48.566255  375205 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:20:48.566336  375205 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:20:48.566437  375205 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:20:48.566521  375205 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:20:48.566606  375205 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0108 22:20:48.566682  375205 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:20:48.566771  375205 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:20:48.566859  375205 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:20:48.566957  375205 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:20:48.567046  375205 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:20:48.569013  375205 out.go:204]   - Booting up control plane ...
	I0108 22:20:48.569130  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:20:48.569247  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:20:48.569353  375205 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:20:48.569468  375205 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:20:48.569588  375205 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:20:48.569656  375205 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:20:48.569873  375205 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:20:48.569977  375205 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002900 seconds
	I0108 22:20:48.570115  375205 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:20:48.570289  375205 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:20:48.570372  375205 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:20:48.570558  375205 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-675668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:20:48.570648  375205 kubeadm.go:322] [bootstrap-token] Using token: t5purj.kqjcf0swk5rb5mxk
	I0108 22:20:48.572249  375205 out.go:204]   - Configuring RBAC rules ...
	I0108 22:20:48.572407  375205 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:20:48.572525  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:20:48.572698  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:20:48.572845  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:20:48.572985  375205 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:20:48.573060  375205 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:20:48.573192  375205 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:20:48.573253  375205 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:20:48.573309  375205 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:20:48.573316  375205 kubeadm.go:322] 
	I0108 22:20:48.573365  375205 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:20:48.573372  375205 kubeadm.go:322] 
	I0108 22:20:48.573433  375205 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:20:48.573440  375205 kubeadm.go:322] 
	I0108 22:20:48.573466  375205 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:20:48.573516  375205 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:20:48.573559  375205 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:20:48.573565  375205 kubeadm.go:322] 
	I0108 22:20:48.573608  375205 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:20:48.573614  375205 kubeadm.go:322] 
	I0108 22:20:48.573656  375205 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:20:48.573663  375205 kubeadm.go:322] 
	I0108 22:20:48.573705  375205 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:20:48.573774  375205 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:20:48.573830  375205 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:20:48.573836  375205 kubeadm.go:322] 
	I0108 22:20:48.573902  375205 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:20:48.573968  375205 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:20:48.573974  375205 kubeadm.go:322] 
	I0108 22:20:48.574041  375205 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t5purj.kqjcf0swk5rb5mxk \
	I0108 22:20:48.574137  375205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:20:48.574168  375205 kubeadm.go:322] 	--control-plane 
	I0108 22:20:48.574179  375205 kubeadm.go:322] 
	I0108 22:20:48.574277  375205 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:20:48.574288  375205 kubeadm.go:322] 
	I0108 22:20:48.574369  375205 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t5purj.kqjcf0swk5rb5mxk \
	I0108 22:20:48.574510  375205 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:20:48.574532  375205 cni.go:84] Creating CNI manager for ""
	I0108 22:20:48.574545  375205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:20:48.576776  375205 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:20:48.578238  375205 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:20:48.605767  375205 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:20:48.656602  375205 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:20:48.656700  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=no-preload-675668 minikube.k8s.io/updated_at=2024_01_08T22_20_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:48.656701  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:48.954525  375205 ops.go:34] apiserver oom_adj: -16
	I0108 22:20:48.954705  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:49.454907  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.014263  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (17.675119667s)
	I0108 22:20:50.014357  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:20:50.032616  375293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:20:50.046779  375293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:20:50.059243  375293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:20:50.059321  375293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:20:50.125341  375293 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:20:50.125427  375293 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:20:50.314274  375293 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:20:50.314692  375293 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:20:50.314859  375293 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:20:50.613241  375293 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:20:47.934671  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:50.435675  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:50.615123  375293 out.go:204]   - Generating certificates and keys ...
	I0108 22:20:50.615298  375293 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:20:50.615442  375293 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:20:50.615588  375293 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:20:50.615684  375293 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:20:50.615978  375293 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:20:50.616644  375293 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:20:50.617070  375293 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:20:50.617625  375293 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:20:50.618175  375293 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:20:50.618746  375293 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:20:50.619222  375293 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:20:50.619315  375293 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:20:50.750595  375293 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:20:50.925827  375293 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:20:51.210091  375293 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:20:51.341979  375293 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:20:51.342383  375293 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:20:51.346252  375293 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:20:51.348515  375293 out.go:204]   - Booting up control plane ...
	I0108 22:20:51.348656  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:20:51.349029  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:20:51.350374  375293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:20:51.368778  375293 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:20:51.370050  375293 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:20:51.370127  375293 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:20:51.533956  375293 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:20:49.955240  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.455461  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:50.954656  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:51.455494  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:51.954708  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.454966  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.955643  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:53.454696  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:53.955234  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:54.455436  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:52.934792  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:55.433713  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:54.955090  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:55.454594  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:55.954634  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:56.455479  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:56.954866  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.455465  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.954857  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:58.454611  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:58.955416  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:59.455690  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:20:57.434365  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:20:59.932616  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:01.038928  375293 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503619 seconds
	I0108 22:21:01.039086  375293 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:21:01.066204  375293 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:21:01.633859  375293 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:21:01.634073  375293 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-903819 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:21:02.161422  375293 kubeadm.go:322] [bootstrap-token] Using token: m5gf05.lf63ehk148mqhzsy
	I0108 22:20:59.954870  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:00.455632  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:00.954611  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:01.455512  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:01.955058  375205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.130771  375205 kubeadm.go:1088] duration metric: took 13.474145806s to wait for elevateKubeSystemPrivileges.
	I0108 22:21:02.130812  375205 kubeadm.go:406] StartCluster complete in 5m13.930335887s
	I0108 22:21:02.130872  375205 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:02.131052  375205 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:21:02.133316  375205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:02.133620  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:21:02.133769  375205 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:21:02.133851  375205 addons.go:69] Setting storage-provisioner=true in profile "no-preload-675668"
	I0108 22:21:02.133874  375205 addons.go:237] Setting addon storage-provisioner=true in "no-preload-675668"
	W0108 22:21:02.133885  375205 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:21:02.133902  375205 addons.go:69] Setting default-storageclass=true in profile "no-preload-675668"
	I0108 22:21:02.133931  375205 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-675668"
	I0108 22:21:02.133944  375205 addons.go:69] Setting metrics-server=true in profile "no-preload-675668"
	I0108 22:21:02.133960  375205 addons.go:237] Setting addon metrics-server=true in "no-preload-675668"
	W0108 22:21:02.133970  375205 addons.go:246] addon metrics-server should already be in state true
	I0108 22:21:02.134007  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.133934  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.134493  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134492  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134531  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.133882  375205 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:21:02.134595  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.134626  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.134679  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.159537  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0108 22:21:02.159560  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0108 22:21:02.159658  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0108 22:21:02.160218  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160310  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160353  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.160816  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160832  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.160837  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160856  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.160923  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.160934  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.161384  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161384  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161436  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.161578  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.162110  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.162156  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.163070  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.163111  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.166373  375205 addons.go:237] Setting addon default-storageclass=true in "no-preload-675668"
	W0108 22:21:02.166398  375205 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:21:02.166437  375205 host.go:66] Checking if "no-preload-675668" exists ...
	I0108 22:21:02.166793  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.166851  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.186248  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0108 22:21:02.186805  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.187689  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.187721  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.189657  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.189934  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0108 22:21:02.190139  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.190885  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.192512  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.192561  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.192883  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.193058  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.193793  375205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:02.193846  375205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:02.194831  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45769
	I0108 22:21:02.197130  375205 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:21:02.195453  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.198890  375205 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:02.198908  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:21:02.198928  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.199474  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.199496  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.202159  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.202458  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.204081  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.204440  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.204470  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.204907  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.205095  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.206369  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.206382  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.208865  375205 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:21:02.207548  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.210754  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:21:02.210777  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:21:02.210806  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.215494  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.216525  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.216572  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.217020  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.217270  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.217433  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.217548  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.218155  375205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0108 22:21:02.219031  375205 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:02.219589  375205 main.go:141] libmachine: Using API Version  1
	I0108 22:21:02.219613  375205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:02.220024  375205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:02.220222  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetState
	I0108 22:21:02.223150  375205 main.go:141] libmachine: (no-preload-675668) Calling .DriverName
	I0108 22:21:02.223618  375205 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:02.223638  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:21:02.223662  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHHostname
	I0108 22:21:02.227537  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.228321  375205 main.go:141] libmachine: (no-preload-675668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:3b:59", ip: ""} in network mk-no-preload-675668: {Iface:virbr3 ExpiryTime:2024-01-08 23:15:20 +0000 UTC Type:0 Mac:52:54:00:08:3b:59 Iaid: IPaddr:192.168.61.153 Prefix:24 Hostname:no-preload-675668 Clientid:01:52:54:00:08:3b:59}
	I0108 22:21:02.228364  375205 main.go:141] libmachine: (no-preload-675668) DBG | domain no-preload-675668 has defined IP address 192.168.61.153 and MAC address 52:54:00:08:3b:59 in network mk-no-preload-675668
	I0108 22:21:02.228729  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHPort
	I0108 22:21:02.228986  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHKeyPath
	I0108 22:21:02.229244  375205 main.go:141] libmachine: (no-preload-675668) Calling .GetSSHUsername
	I0108 22:21:02.229385  375205 sshutil.go:53] new ssh client: &{IP:192.168.61.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/no-preload-675668/id_rsa Username:docker}
	I0108 22:21:02.376102  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:02.442186  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:21:02.442220  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:21:02.463490  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:02.511966  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:21:02.512007  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:21:02.516771  375205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:21:02.645916  375205 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:02.645958  375205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:21:02.693299  375205 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-675668" context rescaled to 1 replicas
	I0108 22:21:02.693524  375205 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.153 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:21:02.696133  375205 out.go:177] * Verifying Kubernetes components...
	I0108 22:21:02.163532  375293 out.go:204]   - Configuring RBAC rules ...
	I0108 22:21:02.163667  375293 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:21:02.202175  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:21:02.230273  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:21:02.239237  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:21:02.245892  375293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:21:02.262139  375293 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:21:02.282319  375293 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:21:02.634155  375293 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:21:02.712856  375293 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:21:02.712895  375293 kubeadm.go:322] 
	I0108 22:21:02.713004  375293 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:21:02.713029  375293 kubeadm.go:322] 
	I0108 22:21:02.713122  375293 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:21:02.713138  375293 kubeadm.go:322] 
	I0108 22:21:02.713175  375293 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:21:02.713243  375293 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:21:02.713342  375293 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:21:02.713367  375293 kubeadm.go:322] 
	I0108 22:21:02.713461  375293 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:21:02.713491  375293 kubeadm.go:322] 
	I0108 22:21:02.713571  375293 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:21:02.713582  375293 kubeadm.go:322] 
	I0108 22:21:02.713672  375293 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:21:02.713775  375293 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:21:02.713903  375293 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:21:02.713916  375293 kubeadm.go:322] 
	I0108 22:21:02.714019  375293 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:21:02.714118  375293 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:21:02.714132  375293 kubeadm.go:322] 
	I0108 22:21:02.714275  375293 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m5gf05.lf63ehk148mqhzsy \
	I0108 22:21:02.714404  375293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:21:02.714427  375293 kubeadm.go:322] 	--control-plane 
	I0108 22:21:02.714439  375293 kubeadm.go:322] 
	I0108 22:21:02.714524  375293 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:21:02.714533  375293 kubeadm.go:322] 
	I0108 22:21:02.714623  375293 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m5gf05.lf63ehk148mqhzsy \
	I0108 22:21:02.714748  375293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:21:02.715538  375293 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:21:02.715812  375293 cni.go:84] Creating CNI manager for ""
	I0108 22:21:02.715830  375293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:21:02.717948  375293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:21:02.719376  375293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:21:02.757728  375293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
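	The 457-byte conflist copied above is not reproduced in the log. For reference, a bridge-plus-portmap CNI config of the same general shape can be written as below; this is an illustrative sketch only — the subnet and exact fields are assumptions, not minikube's actual 1-k8s.conflist.

	  # Sketch: write a minimal bridge CNI config (field values assumed, not minikube's file)
	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF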
	I0108 22:21:02.792630  375293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:21:02.792734  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.792736  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=embed-certs-903819 minikube.k8s.io/updated_at=2024_01_08T22_21_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:02.697938  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:02.989011  375205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:03.814186  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.437994456s)
	I0108 22:21:03.814254  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814255  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.350714909s)
	I0108 22:21:03.814286  375205 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.297474579s)
	I0108 22:21:03.814302  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814321  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814317  375205 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0108 22:21:03.814318  375205 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.116341471s)
	I0108 22:21:03.814267  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814391  375205 node_ready.go:35] waiting up to 6m0s for node "no-preload-675668" to be "Ready" ...
	I0108 22:21:03.814667  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.814692  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.814734  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.814742  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.814765  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814789  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814821  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.814855  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.814868  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.814878  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.814994  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.815008  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.816606  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.816639  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.816649  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.844508  375205 node_ready.go:49] node "no-preload-675668" has status "Ready":"True"
	I0108 22:21:03.844562  375205 node_ready.go:38] duration metric: took 30.150881ms waiting for node "no-preload-675668" to be "Ready" ...
	I0108 22:21:03.844582  375205 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:03.895674  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:03.895707  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:03.896169  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:03.896196  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:03.896243  375205 main.go:141] libmachine: (no-preload-675668) DBG | Closing plugin on server side
	I0108 22:21:03.916148  375205 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-q6x86" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:04.208779  375205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.219716131s)
	I0108 22:21:04.208834  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:04.208853  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:04.209240  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:04.209262  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:04.209275  375205 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:04.209289  375205 main.go:141] libmachine: (no-preload-675668) Calling .Close
	I0108 22:21:04.209564  375205 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:04.209585  375205 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:04.209599  375205 addons.go:473] Verifying addon metrics-server=true in "no-preload-675668"
	I0108 22:21:04.211402  375205 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 22:21:04.212659  375205 addons.go:508] enable addons completed in 2.078891102s: enabled=[storage-provisioner default-storageclass metrics-server]
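	With the addons applied, their rollout can be checked from the host with kubectl. The context name comes from the log; the commands themselves are routine checks, not part of the test run.

	  # Confirm the addon workloads came up in the no-preload profile
	  kubectl --context no-preload-675668 -n kube-system get deploy metrics-server
	  kubectl --context no-preload-675668 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context no-preload-675668 get storageclass                       # default-storageclass
	  kubectl --context no-preload-675668 -n kube-system get pod storage-provisioner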
	I0108 22:21:01.934579  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:03.936076  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:05.936317  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:03.317224  375293 ops.go:34] apiserver oom_adj: -16
	I0108 22:21:03.317384  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:03.817786  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:04.318579  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:04.817664  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.317487  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.818475  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:06.318507  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:06.818090  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:07.318335  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:05.932344  375205 pod_ready.go:92] pod "coredns-76f75df574-q6x86" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.932389  375205 pod_ready.go:81] duration metric: took 2.016206796s waiting for pod "coredns-76f75df574-q6x86" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.932404  375205 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.941282  375205 pod_ready.go:92] pod "etcd-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.941316  375205 pod_ready.go:81] duration metric: took 8.903771ms waiting for pod "etcd-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.941331  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.950226  375205 pod_ready.go:92] pod "kube-apiserver-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.950258  375205 pod_ready.go:81] duration metric: took 8.918375ms waiting for pod "kube-apiserver-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.950273  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.972742  375205 pod_ready.go:92] pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:05.972794  375205 pod_ready.go:81] duration metric: took 22.511438ms waiting for pod "kube-controller-manager-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:05.972816  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b2nx2" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:06.981190  375205 pod_ready.go:92] pod "kube-proxy-b2nx2" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:06.981214  375205 pod_ready.go:81] duration metric: took 1.008391493s waiting for pod "kube-proxy-b2nx2" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:06.981225  375205 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:07.121313  375205 pod_ready.go:92] pod "kube-scheduler-no-preload-675668" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:07.121348  375205 pod_ready.go:81] duration metric: took 140.114425ms waiting for pod "kube-scheduler-no-preload-675668" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:07.121363  375205 pod_ready.go:38] duration metric: took 3.276764424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
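	The per-pod readiness gates polled above can be reproduced by hand with kubectl wait; the selectors below are the labels listed in the log, and the 6m timeout is an arbitrary choice.

	  # Wait for the node and for the system-critical pods the test polls above
	  kubectl --context no-preload-675668 wait --for=condition=Ready node/no-preload-675668 --timeout=6m
	  kubectl --context no-preload-675668 -n kube-system wait --for=condition=Ready pod \
	    -l k8s-app=kube-dns --timeout=6m
	  kubectl --context no-preload-675668 -n kube-system wait --for=condition=Ready pod \
	    -l k8s-app=kube-proxy --timeout=6m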
	I0108 22:21:07.121385  375205 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:21:07.121458  375205 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:21:07.138313  375205 api_server.go:72] duration metric: took 4.444721115s to wait for apiserver process to appear ...
	I0108 22:21:07.138352  375205 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:21:07.138384  375205 api_server.go:253] Checking apiserver healthz at https://192.168.61.153:8443/healthz ...
	I0108 22:21:07.145653  375205 api_server.go:279] https://192.168.61.153:8443/healthz returned 200:
	ok
	I0108 22:21:07.148112  375205 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 22:21:07.148146  375205 api_server.go:131] duration metric: took 9.785033ms to wait for apiserver health ...
	I0108 22:21:07.148158  375205 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:21:07.325218  375205 system_pods.go:59] 8 kube-system pods found
	I0108 22:21:07.325263  375205 system_pods.go:61] "coredns-76f75df574-q6x86" [6cad2e0f-a7af-453d-9eaf-55b56e41e27b] Running
	I0108 22:21:07.325268  375205 system_pods.go:61] "etcd-no-preload-675668" [cd434699-162a-4b04-853d-94dbb1254279] Running
	I0108 22:21:07.325273  375205 system_pods.go:61] "kube-apiserver-no-preload-675668" [d22859b8-f451-40b8-85d7-7f3d548b1af1] Running
	I0108 22:21:07.325279  375205 system_pods.go:61] "kube-controller-manager-no-preload-675668" [8b52fdfe-124a-4d08-b66b-41f1b051fe95] Running
	I0108 22:21:07.325283  375205 system_pods.go:61] "kube-proxy-b2nx2" [b6106f11-9345-4915-b7cc-d2671a7c4e72] Running
	I0108 22:21:07.325287  375205 system_pods.go:61] "kube-scheduler-no-preload-675668" [83562817-27bf-4265-88f0-3dad667687c5] Running
	I0108 22:21:07.325296  375205 system_pods.go:61] "metrics-server-57f55c9bc5-vb2kj" [45489720-2506-46fa-8833-02cbae6f122b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:21:07.325305  375205 system_pods.go:61] "storage-provisioner" [a1c64608-a169-455b-a5e9-0ecb4161432c] Running
	I0108 22:21:07.325323  375205 system_pods.go:74] duration metric: took 177.156331ms to wait for pod list to return data ...
	I0108 22:21:07.325337  375205 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:21:07.521751  375205 default_sa.go:45] found service account: "default"
	I0108 22:21:07.521796  375205 default_sa.go:55] duration metric: took 196.444982ms for default service account to be created ...
	I0108 22:21:07.521809  375205 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:21:07.725848  375205 system_pods.go:86] 8 kube-system pods found
	I0108 22:21:07.725888  375205 system_pods.go:89] "coredns-76f75df574-q6x86" [6cad2e0f-a7af-453d-9eaf-55b56e41e27b] Running
	I0108 22:21:07.725894  375205 system_pods.go:89] "etcd-no-preload-675668" [cd434699-162a-4b04-853d-94dbb1254279] Running
	I0108 22:21:07.725899  375205 system_pods.go:89] "kube-apiserver-no-preload-675668" [d22859b8-f451-40b8-85d7-7f3d548b1af1] Running
	I0108 22:21:07.725904  375205 system_pods.go:89] "kube-controller-manager-no-preload-675668" [8b52fdfe-124a-4d08-b66b-41f1b051fe95] Running
	I0108 22:21:07.725908  375205 system_pods.go:89] "kube-proxy-b2nx2" [b6106f11-9345-4915-b7cc-d2671a7c4e72] Running
	I0108 22:21:07.725913  375205 system_pods.go:89] "kube-scheduler-no-preload-675668" [83562817-27bf-4265-88f0-3dad667687c5] Running
	I0108 22:21:07.725920  375205 system_pods.go:89] "metrics-server-57f55c9bc5-vb2kj" [45489720-2506-46fa-8833-02cbae6f122b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:21:07.725926  375205 system_pods.go:89] "storage-provisioner" [a1c64608-a169-455b-a5e9-0ecb4161432c] Running
	I0108 22:21:07.725937  375205 system_pods.go:126] duration metric: took 204.121913ms to wait for k8s-apps to be running ...
	I0108 22:21:07.725946  375205 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:21:07.726014  375205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:07.745719  375205 system_svc.go:56] duration metric: took 19.7558ms WaitForService to wait for kubelet.
	I0108 22:21:07.745762  375205 kubeadm.go:581] duration metric: took 5.052181219s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:21:07.745787  375205 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:21:07.923051  375205 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:21:07.923108  375205 node_conditions.go:123] node cpu capacity is 2
	I0108 22:21:07.923124  375205 node_conditions.go:105] duration metric: took 177.330669ms to run NodePressure ...
	I0108 22:21:07.923140  375205 start.go:228] waiting for startup goroutines ...
	I0108 22:21:07.923150  375205 start.go:233] waiting for cluster config update ...
	I0108 22:21:07.923164  375205 start.go:242] writing updated cluster config ...
	I0108 22:21:07.923585  375205 ssh_runner.go:195] Run: rm -f paused
	I0108 22:21:07.985436  375205 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0108 22:21:07.987522  375205 out.go:177] * Done! kubectl is now configured to use "no-preload-675668" cluster and "default" namespace by default
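	The healthz probe and version check the log performs against https://192.168.61.153:8443 can be repeated manually; -k skips verification of the cluster's self-signed CA (or point curl at the profile's ca.crt instead).

	  # Reproduce the apiserver health and version checks from the log
	  curl -sk https://192.168.61.153:8443/healthz ; echo
	  kubectl --context no-preload-675668 version            # client 1.29.0, server v1.29.0-rc.2 per the log
	  kubectl --context no-preload-675668 get nodes -o wide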
	I0108 22:21:07.936490  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:10.434333  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:07.817734  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:08.318472  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:08.818320  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:09.317791  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:09.818298  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:10.317739  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:10.818233  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:11.317545  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:11.818344  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:12.317620  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:12.817911  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:13.317976  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:13.817670  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:14.317747  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:14.817596  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:15.318339  375293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:15.465438  375293 kubeadm.go:1088] duration metric: took 12.672788245s to wait for elevateKubeSystemPrivileges.
	I0108 22:21:15.465476  375293 kubeadm.go:406] StartCluster complete in 5m14.917822837s
	I0108 22:21:15.465503  375293 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:15.465612  375293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:21:15.468437  375293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:21:15.468772  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:21:15.468921  375293 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:21:15.469008  375293 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-903819"
	I0108 22:21:15.469017  375293 addons.go:69] Setting default-storageclass=true in profile "embed-certs-903819"
	I0108 22:21:15.469036  375293 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-903819"
	I0108 22:21:15.469052  375293 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 22:21:15.469064  375293 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:21:15.469060  375293 addons.go:69] Setting metrics-server=true in profile "embed-certs-903819"
	I0108 22:21:15.469037  375293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-903819"
	I0108 22:21:15.469111  375293 addons.go:237] Setting addon metrics-server=true in "embed-certs-903819"
	W0108 22:21:15.469128  375293 addons.go:246] addon metrics-server should already be in state true
	I0108 22:21:15.469139  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.469189  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.469584  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469635  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469676  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.469647  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.469585  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.469825  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.488818  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0108 22:21:15.489266  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.491196  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39101
	I0108 22:21:15.491253  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0108 22:21:15.491759  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.491787  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.491816  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.492193  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.492365  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.492383  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.492747  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.492790  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.493002  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.493056  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.493670  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.493702  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.494305  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.494329  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.494841  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.495072  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.499830  375293 addons.go:237] Setting addon default-storageclass=true in "embed-certs-903819"
	W0108 22:21:15.499867  375293 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:21:15.499903  375293 host.go:66] Checking if "embed-certs-903819" exists ...
	I0108 22:21:15.500396  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.500568  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.516135  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0108 22:21:15.516748  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.517517  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.517566  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.518117  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.518378  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.519282  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0108 22:21:15.520505  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.520596  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.522491  375293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:21:15.521662  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.524042  375293 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:15.524051  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.524059  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:21:15.524081  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.524560  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.524774  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.527237  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.529443  375293 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:21:15.528147  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.528787  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.531192  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:21:15.531217  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:21:15.531249  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.531217  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.531343  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.531599  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.531825  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.532078  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.535903  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0108 22:21:15.536161  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.536527  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.536553  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.536618  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.536766  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.536994  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.537194  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.537359  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.537370  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.537426  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.537948  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.538486  375293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:21:15.538508  375293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:21:15.557562  375293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I0108 22:21:15.558072  375293 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:21:15.558613  375293 main.go:141] libmachine: Using API Version  1
	I0108 22:21:15.558643  375293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:21:15.559096  375293 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:21:15.559318  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetState
	I0108 22:21:15.561435  375293 main.go:141] libmachine: (embed-certs-903819) Calling .DriverName
	I0108 22:21:15.561769  375293 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:15.561788  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:21:15.561809  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHHostname
	I0108 22:21:15.564959  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.565410  375293 main.go:141] libmachine: (embed-certs-903819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:74:da", ip: ""} in network mk-embed-certs-903819: {Iface:virbr1 ExpiryTime:2024-01-08 23:15:41 +0000 UTC Type:0 Mac:52:54:00:73:74:da Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:embed-certs-903819 Clientid:01:52:54:00:73:74:da}
	I0108 22:21:15.565442  375293 main.go:141] libmachine: (embed-certs-903819) DBG | domain embed-certs-903819 has defined IP address 192.168.72.132 and MAC address 52:54:00:73:74:da in network mk-embed-certs-903819
	I0108 22:21:15.565628  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHPort
	I0108 22:21:15.565836  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHKeyPath
	I0108 22:21:15.565994  375293 main.go:141] libmachine: (embed-certs-903819) Calling .GetSSHUsername
	I0108 22:21:15.566145  375293 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/embed-certs-903819/id_rsa Username:docker}
	I0108 22:21:15.740070  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:21:15.740112  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:21:15.762954  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:21:15.779320  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:21:15.819423  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:21:15.821997  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:21:15.822039  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:21:15.911195  375293 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:15.911231  375293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:21:16.022419  375293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:21:16.061550  375293 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-903819" context rescaled to 1 replicas
	I0108 22:21:16.061625  375293 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:21:16.063813  375293 out.go:177] * Verifying Kubernetes components...
	I0108 22:21:12.435066  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:14.936374  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:16.065433  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:17.600634  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.837630321s)
	I0108 22:21:17.600727  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.600751  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.601111  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.601133  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:17.601145  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.601155  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.601162  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.601437  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.601478  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.601496  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:17.658136  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:17.658160  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:17.658512  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:17.658539  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:17.658556  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.633155  375293 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.813676374s)
	I0108 22:21:18.633329  375293 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
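	The replace above injects a hosts block into the CoreDNS Corefile so cluster workloads can resolve host.minikube.internal to the host-side gateway (192.168.72.1 here). The result can be inspected as follows; the commented block is a paraphrase of the sed expression in the log, not captured output.

	  # Show the patched Corefile; it should now contain a block along the lines of:
	  #   hosts {
	  #      192.168.72.1 host.minikube.internal
	  #      fallthrough
	  #   }
	  kubectl --context embed-certs-903819 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'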
	I0108 22:21:18.633460  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.610999344s)
	I0108 22:21:18.633535  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.633576  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.633728  375293 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.568262314s)
	I0108 22:21:18.633793  375293 node_ready.go:35] waiting up to 6m0s for node "embed-certs-903819" to be "Ready" ...
	I0108 22:21:18.634123  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.634212  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.634247  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.634274  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.634293  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.634767  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.634836  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.634875  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.634901  375293 addons.go:473] Verifying addon metrics-server=true in "embed-certs-903819"
	I0108 22:21:18.638741  375293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.85936832s)
	I0108 22:21:18.638810  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.638826  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.639227  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.639301  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.639322  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.639333  375293 main.go:141] libmachine: Making call to close driver server
	I0108 22:21:18.639353  375293 main.go:141] libmachine: (embed-certs-903819) Calling .Close
	I0108 22:21:18.639611  375293 main.go:141] libmachine: (embed-certs-903819) DBG | Closing plugin on server side
	I0108 22:21:18.639643  375293 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:21:18.639652  375293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:21:18.641291  375293 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0108 22:21:17.433629  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:19.436354  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:18.642785  375293 addons.go:508] enable addons completed in 3.173862498s: enabled=[default-storageclass metrics-server storage-provisioner]
	I0108 22:21:18.710469  375293 node_ready.go:49] node "embed-certs-903819" has status "Ready":"True"
	I0108 22:21:18.710510  375293 node_ready.go:38] duration metric: took 76.686364ms waiting for node "embed-certs-903819" to be "Ready" ...
	I0108 22:21:18.710526  375293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:18.737405  375293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.747084  375293 pod_ready.go:92] pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.747120  375293 pod_ready.go:81] duration metric: took 1.009672279s waiting for pod "coredns-5dd5756b68-jbz6n" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.747136  375293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.758191  375293 pod_ready.go:92] pod "etcd-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.758217  375293 pod_ready.go:81] duration metric: took 11.073973ms waiting for pod "etcd-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.758227  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.770167  375293 pod_ready.go:92] pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.770199  375293 pod_ready.go:81] duration metric: took 11.962809ms waiting for pod "kube-apiserver-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.770213  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.778549  375293 pod_ready.go:92] pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:19.778576  375293 pod_ready.go:81] duration metric: took 8.355574ms waiting for pod "kube-controller-manager-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:19.778593  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqj9b" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.291841  375293 pod_ready.go:92] pod "kube-proxy-hqj9b" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:20.291889  375293 pod_ready.go:81] duration metric: took 513.287335ms waiting for pod "kube-proxy-hqj9b" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.291907  375293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.639437  375293 pod_ready.go:92] pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace has status "Ready":"True"
	I0108 22:21:20.639482  375293 pod_ready.go:81] duration metric: took 347.563689ms waiting for pod "kube-scheduler-embed-certs-903819" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:20.639507  375293 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace to be "Ready" ...
	I0108 22:21:22.648411  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:21.933418  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:24.435043  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:25.150951  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:27.650444  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:26.937451  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:27.925059  375556 pod_ready.go:81] duration metric: took 4m0.000207907s waiting for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" ...
	E0108 22:21:27.925103  375556 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6w57p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 22:21:27.925128  375556 pod_ready.go:38] duration metric: took 4m40.430696194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:21:27.925167  375556 kubeadm.go:640] restartCluster took 5m4.814420494s
	W0108 22:21:27.925297  375556 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 22:21:27.925360  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
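	When a pod such as metrics-server-57f55c9bc5-6w57p stays NotReady for the whole 4m0s window, the usual first step (before the cluster is reset and the pod disappears) is to look at its events and logs. The context name is the profile this process is driving, and the k8s-app=metrics-server selector is the label the standard metrics-server manifests carry — both are assumptions here, not taken from this log.

	  # Inspect why metrics-server never reported Ready
	  kubectl --context default-k8s-diff-port-292054 -n kube-system describe pod metrics-server-57f55c9bc5-6w57p
	  kubectl --context default-k8s-diff-port-292054 -n kube-system logs -l k8s-app=metrics-server --all-containers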
	I0108 22:21:30.149112  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:32.149588  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:34.150894  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:36.649733  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:39.151257  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:41.647739  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:43.145693  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (15.220300874s)
	I0108 22:21:43.145789  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:21:43.162489  375556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:21:43.174147  375556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:21:43.184922  375556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:21:43.184985  375556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:21:43.249215  375556 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:21:43.249349  375556 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:21:43.441703  375556 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:21:43.441851  375556 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:21:43.441998  375556 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:21:43.739390  375556 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:21:43.742109  375556 out.go:204]   - Generating certificates and keys ...
	I0108 22:21:43.742213  375556 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:21:43.742298  375556 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:21:43.742469  375556 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 22:21:43.742561  375556 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 22:21:43.742651  375556 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 22:21:43.743428  375556 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 22:21:43.744699  375556 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 22:21:43.746015  375556 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 22:21:43.747206  375556 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 22:21:43.748318  375556 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 22:21:43.749156  375556 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 22:21:43.749237  375556 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:21:43.859844  375556 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:21:44.418300  375556 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:21:44.582066  375556 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:21:44.829395  375556 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:21:44.830276  375556 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:21:44.833494  375556 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:21:44.835724  375556 out.go:204]   - Booting up control plane ...
	I0108 22:21:44.835871  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:21:44.835997  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:21:44.836115  375556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:21:44.858575  375556 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:21:44.859658  375556 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:21:44.859774  375556 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:21:45.004925  375556 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:21:43.648821  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:46.148491  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:48.152137  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:50.649779  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:54.508960  375556 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503706 seconds
	I0108 22:21:54.509100  375556 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:21:54.534526  375556 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:21:55.088263  375556 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:21:55.088497  375556 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-292054 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:21:55.625246  375556 kubeadm.go:322] [bootstrap-token] Using token: ca3oft.99pjh791kq903kea
	I0108 22:21:55.627406  375556 out.go:204]   - Configuring RBAC rules ...
	I0108 22:21:55.627535  375556 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:21:55.635469  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:21:55.658589  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:21:55.664394  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:21:55.670923  375556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:21:55.678315  375556 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:21:55.707544  375556 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:21:56.011289  375556 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:21:56.074068  375556 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:21:56.074122  375556 kubeadm.go:322] 
	I0108 22:21:56.074195  375556 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:21:56.074210  375556 kubeadm.go:322] 
	I0108 22:21:56.074305  375556 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:21:56.074315  375556 kubeadm.go:322] 
	I0108 22:21:56.074346  375556 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:21:56.074474  375556 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:21:56.074550  375556 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:21:56.074560  375556 kubeadm.go:322] 
	I0108 22:21:56.074635  375556 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:21:56.074649  375556 kubeadm.go:322] 
	I0108 22:21:56.074713  375556 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:21:56.074723  375556 kubeadm.go:322] 
	I0108 22:21:56.074810  375556 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:21:56.074933  375556 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:21:56.075027  375556 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:21:56.075037  375556 kubeadm.go:322] 
	I0108 22:21:56.075161  375556 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:21:56.075285  375556 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:21:56.075295  375556 kubeadm.go:322] 
	I0108 22:21:56.075430  375556 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ca3oft.99pjh791kq903kea \
	I0108 22:21:56.075574  375556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:21:56.075612  375556 kubeadm.go:322] 	--control-plane 
	I0108 22:21:56.075621  375556 kubeadm.go:322] 
	I0108 22:21:56.075733  375556 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:21:56.075744  375556 kubeadm.go:322] 
	I0108 22:21:56.075843  375556 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ca3oft.99pjh791kq903kea \
	I0108 22:21:56.075969  375556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:21:56.076235  375556 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:21:56.076281  375556 cni.go:84] Creating CNI manager for ""
	I0108 22:21:56.076299  375556 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:21:56.078385  375556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:21:56.079942  375556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:21:53.149618  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:55.649585  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:57.650103  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:21:56.112245  375556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:21:56.183435  375556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:21:56.183568  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:56.183570  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=default-k8s-diff-port-292054 minikube.k8s.io/updated_at=2024_01_08T22_21_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:56.217296  375556 ops.go:34] apiserver oom_adj: -16
	I0108 22:21:56.721884  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:57.222982  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:57.722219  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:58.222712  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:58.722544  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:59.222082  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:21:59.722808  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.222562  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.722284  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:00.149913  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:02.650967  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:01.222401  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:01.722606  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:02.222313  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:02.722582  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:03.222793  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:03.722359  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:04.222245  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:04.722706  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.222841  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.722871  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:05.148941  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:07.149461  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:06.222648  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:06.722581  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:07.222288  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:07.722274  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.222744  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.722856  375556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:22:08.963467  375556 kubeadm.go:1088] duration metric: took 12.779973028s to wait for elevateKubeSystemPrivileges.
	I0108 22:22:08.963522  375556 kubeadm.go:406] StartCluster complete in 5m45.912753673s
	I0108 22:22:08.963553  375556 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:22:08.963665  375556 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:22:08.966435  375556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:22:08.966775  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:22:08.966928  375556 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:22:08.967034  375556 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967075  375556 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:22:08.967095  375556 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.967104  375556 addons.go:246] addon storage-provisioner should already be in state true
	I0108 22:22:08.967152  375556 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967183  375556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-292054"
	I0108 22:22:08.967192  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.967271  375556 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-292054"
	I0108 22:22:08.967300  375556 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.967310  375556 addons.go:246] addon metrics-server should already be in state true
	I0108 22:22:08.967375  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.967667  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967695  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.967756  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967769  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.967779  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.967796  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.986925  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0108 22:22:08.987023  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I0108 22:22:08.987549  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.987698  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.988282  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.988313  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.988483  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.988508  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.988606  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0108 22:22:08.989056  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.989111  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.989337  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:08.989834  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.989872  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.990158  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:08.990780  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:08.990796  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:08.991245  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:08.991880  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.991911  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:08.995239  375556 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-292054"
	W0108 22:22:08.995265  375556 addons.go:246] addon default-storageclass should already be in state true
	I0108 22:22:08.995290  375556 host.go:66] Checking if "default-k8s-diff-port-292054" exists ...
	I0108 22:22:08.995820  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:08.995865  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:09.011939  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0108 22:22:09.012468  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.013299  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.013318  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.013724  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.013935  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
	I0108 22:22:09.014168  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.014906  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.015481  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.015498  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.015842  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.016396  375556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:22:09.016424  375556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:22:09.016659  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.016741  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
	I0108 22:22:09.019481  375556 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 22:22:09.017701  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.021632  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:22:09.021669  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:22:09.021704  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.022354  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.022387  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.022852  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.023158  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.025362  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.027347  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.029567  375556 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:22:09.027877  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.028367  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.032055  375556 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:22:09.032070  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:22:09.032103  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.032160  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.032368  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.032489  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.032591  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.037266  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.037969  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.038003  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.038588  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.038650  375556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0108 22:22:09.038933  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.039112  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.039299  375556 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:22:09.039313  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.039936  375556 main.go:141] libmachine: Using API Version  1
	I0108 22:22:09.039974  375556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:22:09.040395  375556 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:22:09.040652  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetState
	I0108 22:22:09.042584  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .DriverName
	I0108 22:22:09.043735  375556 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:22:09.043754  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:22:09.043774  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHHostname
	I0108 22:22:09.047511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.047647  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:25:78", ip: ""} in network mk-default-k8s-diff-port-292054: {Iface:virbr4 ExpiryTime:2024-01-08 23:16:06 +0000 UTC Type:0 Mac:52:54:00:8d:25:78 Iaid: IPaddr:192.168.50.18 Prefix:24 Hostname:default-k8s-diff-port-292054 Clientid:01:52:54:00:8d:25:78}
	I0108 22:22:09.047668  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | domain default-k8s-diff-port-292054 has defined IP address 192.168.50.18 and MAC address 52:54:00:8d:25:78 in network mk-default-k8s-diff-port-292054
	I0108 22:22:09.047828  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHPort
	I0108 22:22:09.048115  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHKeyPath
	I0108 22:22:09.048267  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .GetSSHUsername
	I0108 22:22:09.048432  375556 sshutil.go:53] new ssh client: &{IP:192.168.50.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/default-k8s-diff-port-292054/id_rsa Username:docker}
	I0108 22:22:09.273503  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:22:09.286359  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:22:09.286398  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 22:22:09.395127  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:22:09.395521  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:22:09.399318  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:22:09.399351  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:22:09.529413  375556 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-292054" context rescaled to 1 replicas
	I0108 22:22:09.529456  375556 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.18 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:22:09.531970  375556 out.go:177] * Verifying Kubernetes components...
	I0108 22:22:09.533935  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:22:09.608669  375556 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:22:09.608706  375556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:22:09.762095  375556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:22:11.642700  375556 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369133486s)
	I0108 22:22:11.642752  375556 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0108 22:22:12.525251  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.130061811s)
	I0108 22:22:12.525333  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525335  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.129764757s)
	I0108 22:22:12.525352  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.525383  375556 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.99138928s)
	I0108 22:22:12.525439  375556 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-292054" to be "Ready" ...
	I0108 22:22:12.525390  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525511  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.525785  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.525799  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.525810  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.525820  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.526200  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526208  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526224  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.526234  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.526244  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.526250  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526320  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526345  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.526627  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.526640  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.526644  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.600599  375556 node_ready.go:49] node "default-k8s-diff-port-292054" has status "Ready":"True"
	I0108 22:22:12.600630  375556 node_ready.go:38] duration metric: took 75.170013ms waiting for node "default-k8s-diff-port-292054" to be "Ready" ...
	I0108 22:22:12.600642  375556 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:22:12.607695  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.607735  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.608178  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.608205  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.698479  375556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.93630517s)
	I0108 22:22:12.698597  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.698624  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.699090  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.699114  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.699129  375556 main.go:141] libmachine: Making call to close driver server
	I0108 22:22:12.699141  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) Calling .Close
	I0108 22:22:12.699570  375556 main.go:141] libmachine: (default-k8s-diff-port-292054) DBG | Closing plugin on server side
	I0108 22:22:12.699611  375556 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:22:12.699628  375556 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:22:12.699642  375556 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-292054"
	I0108 22:22:12.702579  375556 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 22:22:09.152248  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:11.649021  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:12.704051  375556 addons.go:508] enable addons completed in 3.737129591s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0108 22:22:12.730733  375556 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.740214  375556 pod_ready.go:92] pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.740241  375556 pod_ready.go:81] duration metric: took 1.009466865s waiting for pod "coredns-5dd5756b68-mgr9p" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.740252  375556 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.749855  375556 pod_ready.go:92] pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.749884  375556 pod_ready.go:81] duration metric: took 9.624914ms waiting for pod "coredns-5dd5756b68-r27zw" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.749897  375556 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.774037  375556 pod_ready.go:92] pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.774082  375556 pod_ready.go:81] duration metric: took 24.173765ms waiting for pod "etcd-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.774099  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.793737  375556 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.793763  375556 pod_ready.go:81] duration metric: took 19.654354ms waiting for pod "kube-apiserver-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.793786  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.802646  375556 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:13.802675  375556 pod_ready.go:81] duration metric: took 8.880262ms waiting for pod "kube-controller-manager-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.802686  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bwmkb" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:14.935671  375556 pod_ready.go:92] pod "kube-proxy-bwmkb" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:14.935701  375556 pod_ready.go:81] duration metric: took 1.133008415s waiting for pod "kube-proxy-bwmkb" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:14.935712  375556 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:15.337751  375556 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace has status "Ready":"True"
	I0108 22:22:15.337785  375556 pod_ready.go:81] duration metric: took 402.065003ms waiting for pod "kube-scheduler-default-k8s-diff-port-292054" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:15.337799  375556 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace to be "Ready" ...
	I0108 22:22:13.651032  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:16.150676  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:17.347997  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:19.848727  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:18.651581  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:21.153888  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:22.348002  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:24.348563  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:23.159095  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:25.648575  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:27.650462  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:26.847900  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:28.848176  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:30.148277  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:32.148917  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:31.353639  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:33.847750  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:34.649869  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:36.650396  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:36.349185  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:38.846642  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:40.851501  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:39.148741  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:41.150479  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:43.348737  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:45.848448  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:43.649911  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:46.149760  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:48.348731  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:50.849503  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:48.648402  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:50.649986  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:53.349307  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:55.349864  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:53.152397  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:55.651270  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:57.652287  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:57.854209  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:00.347211  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:22:59.655447  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:02.151802  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:02.351659  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:04.848930  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:04.650649  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:07.148845  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:06.864466  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:09.349319  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:09.150267  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:11.647897  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:11.350470  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:13.846976  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:13.648246  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:15.653072  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:16.348755  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:18.847624  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:20.850947  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:18.147230  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:20.148799  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:22.150181  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:22.854027  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:25.347172  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:24.648528  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:26.650104  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:27.350880  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:29.847065  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:28.651914  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:31.149983  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:31.849609  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:33.849918  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:35.852770  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:33.648054  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:35.650693  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:38.346376  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:40.347831  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:38.148131  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:40.149293  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:42.151041  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:42.845779  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:44.849417  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:44.655548  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:47.150423  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:46.850811  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:49.347304  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:49.652923  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:52.149820  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:51.348180  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:53.846474  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:55.847511  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:54.649820  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:57.149372  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:57.849233  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:00.348798  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:23:59.154056  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:01.649087  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:02.349247  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:04.350582  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:03.650176  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:06.153560  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:06.848567  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:09.349670  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:08.649461  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:11.149266  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:11.847194  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:13.847282  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:15.849466  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:13.650152  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:15.653477  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:17.849683  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:20.348186  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:18.150536  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:20.650961  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:22.849232  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:25.349020  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:23.149893  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:25.151776  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:27.649498  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:27.848253  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:29.849644  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:29.651074  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:32.151463  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:32.348246  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:34.349140  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:34.650582  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:36.651676  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:36.848220  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:38.848664  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:40.848971  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:39.152183  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:41.648320  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:42.849338  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:45.347960  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:44.150739  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:46.649332  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:47.350030  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:49.847947  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:48.650293  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:50.650602  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:52.344857  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:54.347419  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:53.149776  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:55.150342  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:57.648269  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:56.347866  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:58.350081  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:00.848175  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:24:59.650591  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:02.149598  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:03.349797  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:05.849888  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:04.648771  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:06.651847  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:08.346160  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:10.348673  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:09.149033  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:11.149301  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:12.352279  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:14.846849  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:13.153318  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:15.651109  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:16.849657  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:19.347996  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:18.150751  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:20.650211  375293 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:20.650242  375293 pod_ready.go:81] duration metric: took 4m0.010726332s waiting for pod "metrics-server-57f55c9bc5-qhjlv" in "kube-system" namespace to be "Ready" ...
	E0108 22:25:20.650252  375293 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 22:25:20.650259  375293 pod_ready.go:38] duration metric: took 4m1.939720475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:25:20.650300  375293 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:25:20.650336  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:20.650406  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:20.714451  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:20.714500  375293 cri.go:89] found id: ""
	I0108 22:25:20.714513  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:20.714621  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.720237  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:20.720367  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:20.767857  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:20.767904  375293 cri.go:89] found id: ""
	I0108 22:25:20.767916  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:20.767995  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.772859  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:20.772969  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:20.817193  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:20.817225  375293 cri.go:89] found id: ""
	I0108 22:25:20.817236  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:20.817310  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.824003  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:20.824113  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:20.884204  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:20.884252  375293 cri.go:89] found id: ""
	I0108 22:25:20.884263  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:20.884335  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.889658  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:20.889756  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:20.949423  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:20.949460  375293 cri.go:89] found id: ""
	I0108 22:25:20.949472  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:20.949543  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:20.954856  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:20.954944  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:21.011490  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:21.011538  375293 cri.go:89] found id: ""
	I0108 22:25:21.011551  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:21.011629  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:21.017544  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:21.017638  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:21.066267  375293 cri.go:89] found id: ""
	I0108 22:25:21.066310  375293 logs.go:284] 0 containers: []
	W0108 22:25:21.066322  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:21.066331  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:21.066404  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:21.123537  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:21.123571  375293 cri.go:89] found id: ""
	I0108 22:25:21.123583  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:21.123660  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:21.129269  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:21.129309  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:21.200266  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:21.200308  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:21.246669  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:21.246705  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:21.265861  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:21.265908  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:21.327968  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:21.328016  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:21.386940  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:21.386986  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:21.443896  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:21.443941  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:21.496699  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:21.496746  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:21.962773  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:21.962820  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:22.024288  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:22.024330  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:22.133928  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:22.133976  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:22.301006  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:22.301051  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:21.348655  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:23.350759  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:25.351301  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:24.847470  375293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:25:24.867718  375293 api_server.go:72] duration metric: took 4m8.80605206s to wait for apiserver process to appear ...
	I0108 22:25:24.867750  375293 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:25:24.867788  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:24.867842  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:24.918048  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:24.918090  375293 cri.go:89] found id: ""
	I0108 22:25:24.918104  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:24.918196  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:24.923984  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:24.924096  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:24.981033  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:24.981058  375293 cri.go:89] found id: ""
	I0108 22:25:24.981066  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:24.981116  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:24.985729  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:24.985802  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:25.038522  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:25.038558  375293 cri.go:89] found id: ""
	I0108 22:25:25.038570  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:25.038637  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.043106  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:25.043218  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:25.100189  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:25.100218  375293 cri.go:89] found id: ""
	I0108 22:25:25.100230  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:25.100298  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.107135  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:25.107252  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:25.155243  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:25.155276  375293 cri.go:89] found id: ""
	I0108 22:25:25.155288  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:25.155354  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.160457  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:25.160559  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:25.214754  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:25.214788  375293 cri.go:89] found id: ""
	I0108 22:25:25.214799  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:25.214855  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.219504  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:25.219595  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:25.267255  375293 cri.go:89] found id: ""
	I0108 22:25:25.267302  375293 logs.go:284] 0 containers: []
	W0108 22:25:25.267318  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:25.267329  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:25.267442  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:25.322636  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:25.322668  375293 cri.go:89] found id: ""
	I0108 22:25:25.322679  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:25.322750  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:25.327559  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:25.327592  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:25.396299  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:25.396354  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:25.447121  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:25.447188  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:25.501357  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:25.501413  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:25.572678  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:25.572741  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:25.624203  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:25.624248  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:26.021189  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:26.021250  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:26.122845  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:26.122893  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:26.297704  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:26.297746  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:26.361771  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:26.361826  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:26.422252  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:26.422292  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:26.479602  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:26.479641  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:27.848906  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:30.348452  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:28.997002  375293 api_server.go:253] Checking apiserver healthz at https://192.168.72.132:8443/healthz ...
	I0108 22:25:29.008040  375293 api_server.go:279] https://192.168.72.132:8443/healthz returned 200:
	ok
	I0108 22:25:29.009729  375293 api_server.go:141] control plane version: v1.28.4
	I0108 22:25:29.009758  375293 api_server.go:131] duration metric: took 4.142001296s to wait for apiserver health ...
	I0108 22:25:29.009770  375293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:25:29.009807  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:25:29.009872  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:25:29.064244  375293 cri.go:89] found id: "8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:29.064280  375293 cri.go:89] found id: ""
	I0108 22:25:29.064292  375293 logs.go:284] 1 containers: [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8]
	I0108 22:25:29.064357  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.069801  375293 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:25:29.069900  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:25:29.115294  375293 cri.go:89] found id: "c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:29.115328  375293 cri.go:89] found id: ""
	I0108 22:25:29.115338  375293 logs.go:284] 1 containers: [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7]
	I0108 22:25:29.115426  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.120512  375293 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:25:29.120600  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:25:29.173571  375293 cri.go:89] found id: "9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:29.173600  375293 cri.go:89] found id: ""
	I0108 22:25:29.173609  375293 logs.go:284] 1 containers: [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11]
	I0108 22:25:29.173670  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.179649  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:25:29.179724  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:25:29.230220  375293 cri.go:89] found id: "5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:29.230272  375293 cri.go:89] found id: ""
	I0108 22:25:29.230286  375293 logs.go:284] 1 containers: [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a]
	I0108 22:25:29.230384  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.235437  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:25:29.235540  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:25:29.280861  375293 cri.go:89] found id: "3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:29.280892  375293 cri.go:89] found id: ""
	I0108 22:25:29.280904  375293 logs.go:284] 1 containers: [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c]
	I0108 22:25:29.280974  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.286131  375293 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:25:29.286247  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:25:29.337665  375293 cri.go:89] found id: "ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:29.337700  375293 cri.go:89] found id: ""
	I0108 22:25:29.337711  375293 logs.go:284] 1 containers: [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13]
	I0108 22:25:29.337765  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.343912  375293 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:25:29.344009  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:25:29.400428  375293 cri.go:89] found id: ""
	I0108 22:25:29.400458  375293 logs.go:284] 0 containers: []
	W0108 22:25:29.400466  375293 logs.go:286] No container was found matching "kindnet"
	I0108 22:25:29.400476  375293 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:25:29.400532  375293 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:25:29.458375  375293 cri.go:89] found id: "10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:29.458416  375293 cri.go:89] found id: ""
	I0108 22:25:29.458428  375293 logs.go:284] 1 containers: [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131]
	I0108 22:25:29.458503  375293 ssh_runner.go:195] Run: which crictl
	I0108 22:25:29.464513  375293 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:25:29.464555  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:25:29.809503  375293 logs.go:123] Gathering logs for kubelet ...
	I0108 22:25:29.809550  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:25:29.916786  375293 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:25:29.916864  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:25:30.077876  375293 logs.go:123] Gathering logs for kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] ...
	I0108 22:25:30.077929  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8"
	I0108 22:25:30.139380  375293 logs.go:123] Gathering logs for coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] ...
	I0108 22:25:30.139445  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11"
	I0108 22:25:30.186829  375293 logs.go:123] Gathering logs for kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] ...
	I0108 22:25:30.186861  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a"
	I0108 22:25:30.244185  375293 logs.go:123] Gathering logs for container status ...
	I0108 22:25:30.244230  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:25:30.300429  375293 logs.go:123] Gathering logs for dmesg ...
	I0108 22:25:30.300488  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:25:30.316880  375293 logs.go:123] Gathering logs for etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] ...
	I0108 22:25:30.316920  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7"
	I0108 22:25:30.370537  375293 logs.go:123] Gathering logs for kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] ...
	I0108 22:25:30.370581  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c"
	I0108 22:25:30.419043  375293 logs.go:123] Gathering logs for kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] ...
	I0108 22:25:30.419093  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13"
	I0108 22:25:30.482758  375293 logs.go:123] Gathering logs for storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] ...
	I0108 22:25:30.482804  375293 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131"
	I0108 22:25:33.043083  375293 system_pods.go:59] 8 kube-system pods found
	I0108 22:25:33.043134  375293 system_pods.go:61] "coredns-5dd5756b68-jbz6n" [562faf84-b986-4f0e-97cd-41aa5ac7ea17] Running
	I0108 22:25:33.043139  375293 system_pods.go:61] "etcd-embed-certs-903819" [68146164-7115-4489-8010-32774433564a] Running
	I0108 22:25:33.043143  375293 system_pods.go:61] "kube-apiserver-embed-certs-903819" [367d0612-bd4d-448f-84f2-118afcb9d095] Running
	I0108 22:25:33.043148  375293 system_pods.go:61] "kube-controller-manager-embed-certs-903819" [43c3944a-3dfd-44ce-ba68-baebbced4406] Running
	I0108 22:25:33.043152  375293 system_pods.go:61] "kube-proxy-hqj9b" [14b3f3bd-1d65-4382-adc2-09344b54463d] Running
	I0108 22:25:33.043157  375293 system_pods.go:61] "kube-scheduler-embed-certs-903819" [9c004a9c-c77a-4ee5-970d-db41ddc26439] Running
	I0108 22:25:33.043167  375293 system_pods.go:61] "metrics-server-57f55c9bc5-qhjlv" [f1bff39b-c944-4de0-a5b8-eb239e91c6db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:25:33.043172  375293 system_pods.go:61] "storage-provisioner" [949c6275-6836-4035-89f5-f2d2c2caaa89] Running
	I0108 22:25:33.043180  375293 system_pods.go:74] duration metric: took 4.033402969s to wait for pod list to return data ...
	I0108 22:25:33.043189  375293 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:25:33.047488  375293 default_sa.go:45] found service account: "default"
	I0108 22:25:33.047526  375293 default_sa.go:55] duration metric: took 4.328925ms for default service account to be created ...
	I0108 22:25:33.047540  375293 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:25:33.055793  375293 system_pods.go:86] 8 kube-system pods found
	I0108 22:25:33.055824  375293 system_pods.go:89] "coredns-5dd5756b68-jbz6n" [562faf84-b986-4f0e-97cd-41aa5ac7ea17] Running
	I0108 22:25:33.055829  375293 system_pods.go:89] "etcd-embed-certs-903819" [68146164-7115-4489-8010-32774433564a] Running
	I0108 22:25:33.055834  375293 system_pods.go:89] "kube-apiserver-embed-certs-903819" [367d0612-bd4d-448f-84f2-118afcb9d095] Running
	I0108 22:25:33.055838  375293 system_pods.go:89] "kube-controller-manager-embed-certs-903819" [43c3944a-3dfd-44ce-ba68-baebbced4406] Running
	I0108 22:25:33.055841  375293 system_pods.go:89] "kube-proxy-hqj9b" [14b3f3bd-1d65-4382-adc2-09344b54463d] Running
	I0108 22:25:33.055845  375293 system_pods.go:89] "kube-scheduler-embed-certs-903819" [9c004a9c-c77a-4ee5-970d-db41ddc26439] Running
	I0108 22:25:33.055852  375293 system_pods.go:89] "metrics-server-57f55c9bc5-qhjlv" [f1bff39b-c944-4de0-a5b8-eb239e91c6db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:25:33.055859  375293 system_pods.go:89] "storage-provisioner" [949c6275-6836-4035-89f5-f2d2c2caaa89] Running
	I0108 22:25:33.055872  375293 system_pods.go:126] duration metric: took 8.323722ms to wait for k8s-apps to be running ...
	I0108 22:25:33.055881  375293 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:25:33.055939  375293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:25:33.074598  375293 system_svc.go:56] duration metric: took 18.695286ms WaitForService to wait for kubelet.
	I0108 22:25:33.074637  375293 kubeadm.go:581] duration metric: took 4m17.012976103s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:25:33.074671  375293 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:25:33.079188  375293 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:25:33.079227  375293 node_conditions.go:123] node cpu capacity is 2
	I0108 22:25:33.079246  375293 node_conditions.go:105] duration metric: took 4.559946ms to run NodePressure ...
	I0108 22:25:33.079261  375293 start.go:228] waiting for startup goroutines ...
	I0108 22:25:33.079270  375293 start.go:233] waiting for cluster config update ...
	I0108 22:25:33.079283  375293 start.go:242] writing updated cluster config ...
	I0108 22:25:33.079792  375293 ssh_runner.go:195] Run: rm -f paused
	I0108 22:25:33.144148  375293 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:25:33.146897  375293 out.go:177] * Done! kubectl is now configured to use "embed-certs-903819" cluster and "default" namespace by default
	I0108 22:25:32.349693  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:34.845955  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:36.851909  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:39.348575  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:41.350957  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:43.848565  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:46.348360  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:48.847346  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:51.346764  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:53.849331  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:56.349683  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:25:58.350457  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:00.847803  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:03.352522  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:05.844769  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:07.846346  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:09.848453  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:11.850250  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:14.347576  375556 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:26:15.349616  375556 pod_ready.go:81] duration metric: took 4m0.011802861s waiting for pod "metrics-server-57f55c9bc5-jm9lg" in "kube-system" namespace to be "Ready" ...
	E0108 22:26:15.349643  375556 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 22:26:15.349651  375556 pod_ready.go:38] duration metric: took 4m2.748998751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:26:15.349666  375556 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:26:15.349720  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:15.349773  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:15.414233  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:15.414273  375556 cri.go:89] found id: ""
	I0108 22:26:15.414286  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:15.414367  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.421348  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:15.421439  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:15.480484  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:15.480508  375556 cri.go:89] found id: ""
	I0108 22:26:15.480517  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:15.480569  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.486049  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:15.486125  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:15.551549  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:15.551588  375556 cri.go:89] found id: ""
	I0108 22:26:15.551600  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:15.551665  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.556950  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:15.557035  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:15.607375  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:15.607417  375556 cri.go:89] found id: ""
	I0108 22:26:15.607433  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:15.607530  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.613182  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:15.613253  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:15.663780  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:15.663805  375556 cri.go:89] found id: ""
	I0108 22:26:15.663813  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:15.663882  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.668629  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:15.668748  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:15.722341  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:15.722370  375556 cri.go:89] found id: ""
	I0108 22:26:15.722380  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:15.722453  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.727974  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:15.728089  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:15.782298  375556 cri.go:89] found id: ""
	I0108 22:26:15.782331  375556 logs.go:284] 0 containers: []
	W0108 22:26:15.782349  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:15.782358  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:15.782436  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:15.836150  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:15.836194  375556 cri.go:89] found id: ""
	I0108 22:26:15.836207  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:15.836307  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:15.842152  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:15.842184  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:15.900314  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:15.900378  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:15.974860  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:15.974903  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:16.021465  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:16.021529  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:16.477647  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:16.477706  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:16.588562  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:16.588615  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:16.604310  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:16.604383  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:16.770738  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:16.770778  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:16.835271  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:16.835320  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:16.899297  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:16.899354  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:16.957508  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:16.957549  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:17.001214  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:17.001255  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:19.561271  375556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:26:19.578731  375556 api_server.go:72] duration metric: took 4m10.049236985s to wait for apiserver process to appear ...
	I0108 22:26:19.578768  375556 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:26:19.578821  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:19.578897  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:19.630380  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:19.630410  375556 cri.go:89] found id: ""
	I0108 22:26:19.630422  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:19.630496  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.635902  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:19.635998  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:19.682023  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:19.682057  375556 cri.go:89] found id: ""
	I0108 22:26:19.682072  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:19.682143  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.688443  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:19.688567  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:19.738612  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:19.738651  375556 cri.go:89] found id: ""
	I0108 22:26:19.738664  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:19.738790  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.745590  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:19.745726  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:19.796647  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:19.796674  375556 cri.go:89] found id: ""
	I0108 22:26:19.796685  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:19.796747  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.801789  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:19.801872  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:19.846026  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:19.846060  375556 cri.go:89] found id: ""
	I0108 22:26:19.846070  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:19.846150  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.851227  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:19.851299  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:19.906135  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:19.906173  375556 cri.go:89] found id: ""
	I0108 22:26:19.906184  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:19.906267  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:19.911914  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:19.912048  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:19.960064  375556 cri.go:89] found id: ""
	I0108 22:26:19.960104  375556 logs.go:284] 0 containers: []
	W0108 22:26:19.960117  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:19.960126  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:19.960198  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:20.010136  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:20.010171  375556 cri.go:89] found id: ""
	I0108 22:26:20.010181  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:20.010256  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:20.015368  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:20.015402  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:20.122508  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:20.122575  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:20.272565  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:20.272610  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:20.335281  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:20.335334  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:20.384028  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:20.384088  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:20.779192  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:20.779250  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:20.795137  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:20.795170  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:20.863312  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:20.863395  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:20.918084  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:20.918132  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:20.966066  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:20.966108  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:21.030610  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:21.030704  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:21.083525  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:21.083567  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:23.662287  375556 api_server.go:253] Checking apiserver healthz at https://192.168.50.18:8444/healthz ...
	I0108 22:26:23.671857  375556 api_server.go:279] https://192.168.50.18:8444/healthz returned 200:
	ok
	I0108 22:26:23.673883  375556 api_server.go:141] control plane version: v1.28.4
	I0108 22:26:23.673919  375556 api_server.go:131] duration metric: took 4.095141482s to wait for apiserver health ...
	I0108 22:26:23.673932  375556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:26:23.673967  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:26:23.674045  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:26:23.733069  375556 cri.go:89] found id: "491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:23.733098  375556 cri.go:89] found id: ""
	I0108 22:26:23.733109  375556 logs.go:284] 1 containers: [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348]
	I0108 22:26:23.733168  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.739866  375556 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:26:23.739960  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:26:23.807666  375556 cri.go:89] found id: "bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:23.807693  375556 cri.go:89] found id: ""
	I0108 22:26:23.807704  375556 logs.go:284] 1 containers: [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf]
	I0108 22:26:23.807765  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.813449  375556 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:26:23.813543  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:26:23.876403  375556 cri.go:89] found id: "a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:23.876431  375556 cri.go:89] found id: ""
	I0108 22:26:23.876442  375556 logs.go:284] 1 containers: [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570]
	I0108 22:26:23.876511  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.885128  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:26:23.885232  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:26:23.953100  375556 cri.go:89] found id: "87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:23.953129  375556 cri.go:89] found id: ""
	I0108 22:26:23.953139  375556 logs.go:284] 1 containers: [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4]
	I0108 22:26:23.953211  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:23.960146  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:26:23.960246  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:26:24.022581  375556 cri.go:89] found id: "6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:24.022608  375556 cri.go:89] found id: ""
	I0108 22:26:24.022616  375556 logs.go:284] 1 containers: [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6]
	I0108 22:26:24.022669  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:24.029307  375556 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:26:24.029399  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:26:24.088026  375556 cri.go:89] found id: "3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:24.088063  375556 cri.go:89] found id: ""
	I0108 22:26:24.088074  375556 logs.go:284] 1 containers: [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d]
	I0108 22:26:24.088151  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:24.094051  375556 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:26:24.094175  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:26:24.156867  375556 cri.go:89] found id: ""
	I0108 22:26:24.156902  375556 logs.go:284] 0 containers: []
	W0108 22:26:24.156914  375556 logs.go:286] No container was found matching "kindnet"
	I0108 22:26:24.156924  375556 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 22:26:24.157020  375556 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 22:26:24.219558  375556 cri.go:89] found id: "37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:24.219581  375556 cri.go:89] found id: ""
	I0108 22:26:24.219589  375556 logs.go:284] 1 containers: [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c]
	I0108 22:26:24.219641  375556 ssh_runner.go:195] Run: which crictl
	I0108 22:26:24.224823  375556 logs.go:123] Gathering logs for kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] ...
	I0108 22:26:24.224866  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d"
	I0108 22:26:24.321726  375556 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:26:24.321777  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:26:24.749669  375556 logs.go:123] Gathering logs for etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] ...
	I0108 22:26:24.749737  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf"
	I0108 22:26:24.821645  375556 logs.go:123] Gathering logs for kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] ...
	I0108 22:26:24.821690  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4"
	I0108 22:26:24.883279  375556 logs.go:123] Gathering logs for kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] ...
	I0108 22:26:24.883325  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6"
	I0108 22:26:24.942199  375556 logs.go:123] Gathering logs for kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] ...
	I0108 22:26:24.942253  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348"
	I0108 22:26:25.003721  375556 logs.go:123] Gathering logs for coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] ...
	I0108 22:26:25.003766  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570"
	I0108 22:26:25.051208  375556 logs.go:123] Gathering logs for storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] ...
	I0108 22:26:25.051241  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c"
	I0108 22:26:25.102533  375556 logs.go:123] Gathering logs for container status ...
	I0108 22:26:25.102580  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:26:25.158556  375556 logs.go:123] Gathering logs for kubelet ...
	I0108 22:26:25.158610  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:26:25.263571  375556 logs.go:123] Gathering logs for dmesg ...
	I0108 22:26:25.263618  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:26:25.281380  375556 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:26:25.281414  375556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:26:27.948731  375556 system_pods.go:59] 8 kube-system pods found
	I0108 22:26:27.948767  375556 system_pods.go:61] "coredns-5dd5756b68-r27zw" [c82dae88-118a-4e13-a714-1240d48dfc4e] Running
	I0108 22:26:27.948774  375556 system_pods.go:61] "etcd-default-k8s-diff-port-292054" [d8145b74-cc40-40eb-b9e2-5a19e096e5f7] Running
	I0108 22:26:27.948782  375556 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-292054" [5bb945e6-e633-4fdc-bbec-16c72cb3ca88] Running
	I0108 22:26:27.948787  375556 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-292054" [8d376b79-f3ab-4f74-a927-e3f1775853c0] Running
	I0108 22:26:27.948794  375556 system_pods.go:61] "kube-proxy-bwmkb" [c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2] Running
	I0108 22:26:27.948800  375556 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-292054" [d125cdbe-49e2-48af-bcf8-44d514cd4a1c] Running
	I0108 22:26:27.948811  375556 system_pods.go:61] "metrics-server-57f55c9bc5-jm9lg" [b94afab5-f573-4ed1-bc29-64eb8e90c574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:26:27.948827  375556 system_pods.go:61] "storage-provisioner" [05c2430d-d84e-415e-83b3-c32e7635fe74] Running
	I0108 22:26:27.948839  375556 system_pods.go:74] duration metric: took 4.274897836s to wait for pod list to return data ...
	I0108 22:26:27.948852  375556 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:26:27.952207  375556 default_sa.go:45] found service account: "default"
	I0108 22:26:27.952241  375556 default_sa.go:55] duration metric: took 3.378283ms for default service account to be created ...
	I0108 22:26:27.952252  375556 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:26:27.958708  375556 system_pods.go:86] 8 kube-system pods found
	I0108 22:26:27.958744  375556 system_pods.go:89] "coredns-5dd5756b68-r27zw" [c82dae88-118a-4e13-a714-1240d48dfc4e] Running
	I0108 22:26:27.958752  375556 system_pods.go:89] "etcd-default-k8s-diff-port-292054" [d8145b74-cc40-40eb-b9e2-5a19e096e5f7] Running
	I0108 22:26:27.958757  375556 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-292054" [5bb945e6-e633-4fdc-bbec-16c72cb3ca88] Running
	I0108 22:26:27.958763  375556 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-292054" [8d376b79-f3ab-4f74-a927-e3f1775853c0] Running
	I0108 22:26:27.958767  375556 system_pods.go:89] "kube-proxy-bwmkb" [c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2] Running
	I0108 22:26:27.958772  375556 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-292054" [d125cdbe-49e2-48af-bcf8-44d514cd4a1c] Running
	I0108 22:26:27.958849  375556 system_pods.go:89] "metrics-server-57f55c9bc5-jm9lg" [b94afab5-f573-4ed1-bc29-64eb8e90c574] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:26:27.958860  375556 system_pods.go:89] "storage-provisioner" [05c2430d-d84e-415e-83b3-c32e7635fe74] Running
	I0108 22:26:27.958870  375556 system_pods.go:126] duration metric: took 6.613305ms to wait for k8s-apps to be running ...
	I0108 22:26:27.958892  375556 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:26:27.958967  375556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:26:27.979435  375556 system_svc.go:56] duration metric: took 20.53748ms WaitForService to wait for kubelet.
	I0108 22:26:27.979474  375556 kubeadm.go:581] duration metric: took 4m18.449992338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:26:27.979500  375556 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:26:27.983117  375556 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:26:27.983146  375556 node_conditions.go:123] node cpu capacity is 2
	I0108 22:26:27.983159  375556 node_conditions.go:105] duration metric: took 3.652979ms to run NodePressure ...
	I0108 22:26:27.983171  375556 start.go:228] waiting for startup goroutines ...
	I0108 22:26:27.983177  375556 start.go:233] waiting for cluster config update ...
	I0108 22:26:27.983187  375556 start.go:242] writing updated cluster config ...
	I0108 22:26:27.983521  375556 ssh_runner.go:195] Run: rm -f paused
	I0108 22:26:28.042279  375556 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:26:28.044728  375556 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-292054" cluster and "default" namespace by default
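	(For reference only, not part of the captured output: once a run reports "Done!" as above, the resulting cluster can be spot-checked from the host with kubectl. A minimal sketch, assuming the context name matches the profile name printed in the log line above and that minikube wrote the kubeconfig to its default location:

	    # illustrative sketch; context name taken from the "Done!" line above, not verified here
	    kubectl --context default-k8s-diff-port-292054 get nodes -o wide
	    kubectl --context default-k8s-diff-port-292054 get pods -n kube-system)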
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:16:28 UTC, ends at Mon 2024-01-08 22:35:03 UTC. --
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.206331517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753303206299906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=becb93fb-d3b5-41bb-a73b-3b20fdb68723 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.207996637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c6d970fe-2063-4293-a4bf-da9cb83cd7fe name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.208071766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c6d970fe-2063-4293-a4bf-da9cb83cd7fe name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.208310039Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752259920851228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d8a800f2a2b2f67d1ea5e05ba4caff1f20a555e2af9dd6eadddc72619ba876,PodSandboxId:1886af1d1e6dcb202dab4ff33f61644a22ee706cd53e1c1cdd936c0b788dc54a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704752233573056054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc706965-4d2e-4bd5-a1c1-0616462e9840,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8a5331,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17,PodSandboxId:4bfe2f5311e83d5fe56a101d85af06bc3658e6014ba7457c593937a6db200d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704752232427113478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fzlzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f48e2d6f-a573-463f-b96e-9f96b3161d66,},Annotations:map[string]string{io.kubernetes.container.hash: 74221cba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420,PodSandboxId:cde50b732d649161d9432de65749fe4aef982535d64a8b6dbec2a514de5aae98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704752230515710640,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f37e50-5c82-4288-8cf8
-cb1c576c7472,},Annotations:map[string]string{io.kubernetes.container.hash: 8f345252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704752229007568201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2
fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434,PodSandboxId:739ce810388b681fba1c9d1c993e89e06fc980d56fa3567bbdc2d1972fc9cb9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704752222645618242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39784c5a6adcc95506cfe25e9403f5d5,},Annotations:map[string]string{io.kube
rnetes.container.hash: be9ca32a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab,PodSandboxId:b709e0e02c865c6ac430c5bc6e4e9d3ce8a60c668ee4357037f895e5960ddba6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704752221051594376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592,PodSandboxId:8405cb736193700954fb2c65b085a476d7242091862e396b26750258a7a86cc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704752220967152669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b45d8726bdb80fb0dada6f51c1b17e,},Annotations:map[string]string{io.kubernetes.container.hash:
6ea9e46e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61,PodSandboxId:13e6538c94ae420201ad07f99e833b351472b7b05f1364053536309d1362a05d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704752220829540553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c6d970fe-2063-4293-a4bf-da9cb83cd7fe name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.258139428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8a86216b-f586-486a-80dc-601d81fc8350 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.258247387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8a86216b-f586-486a-80dc-601d81fc8350 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.260270272Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2c4ac3e4-77f5-4426-a908-6ecda5199524 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.260821371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753303260798200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=2c4ac3e4-77f5-4426-a908-6ecda5199524 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.261690764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6c061653-01f2-4528-b6a2-dd53b8e7d7c3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.261750773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6c061653-01f2-4528-b6a2-dd53b8e7d7c3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.262113661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752259920851228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d8a800f2a2b2f67d1ea5e05ba4caff1f20a555e2af9dd6eadddc72619ba876,PodSandboxId:1886af1d1e6dcb202dab4ff33f61644a22ee706cd53e1c1cdd936c0b788dc54a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704752233573056054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc706965-4d2e-4bd5-a1c1-0616462e9840,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8a5331,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17,PodSandboxId:4bfe2f5311e83d5fe56a101d85af06bc3658e6014ba7457c593937a6db200d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704752232427113478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fzlzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f48e2d6f-a573-463f-b96e-9f96b3161d66,},Annotations:map[string]string{io.kubernetes.container.hash: 74221cba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420,PodSandboxId:cde50b732d649161d9432de65749fe4aef982535d64a8b6dbec2a514de5aae98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704752230515710640,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f37e50-5c82-4288-8cf8
-cb1c576c7472,},Annotations:map[string]string{io.kubernetes.container.hash: 8f345252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704752229007568201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2
fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434,PodSandboxId:739ce810388b681fba1c9d1c993e89e06fc980d56fa3567bbdc2d1972fc9cb9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704752222645618242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39784c5a6adcc95506cfe25e9403f5d5,},Annotations:map[string]string{io.kube
rnetes.container.hash: be9ca32a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab,PodSandboxId:b709e0e02c865c6ac430c5bc6e4e9d3ce8a60c668ee4357037f895e5960ddba6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704752221051594376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592,PodSandboxId:8405cb736193700954fb2c65b085a476d7242091862e396b26750258a7a86cc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704752220967152669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b45d8726bdb80fb0dada6f51c1b17e,},Annotations:map[string]string{io.kubernetes.container.hash:
6ea9e46e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61,PodSandboxId:13e6538c94ae420201ad07f99e833b351472b7b05f1364053536309d1362a05d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704752220829540553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6c061653-01f2-4528-b6a2-dd53b8e7d7c3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.311257312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=96aec675-68cb-4aee-9364-57ea9abbc744 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.311351493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=96aec675-68cb-4aee-9364-57ea9abbc744 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.313394753Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=47d48264-3fe7-42dd-8962-20d5b2dc81ef name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.313808251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753303313794534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=47d48264-3fe7-42dd-8962-20d5b2dc81ef name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.314479611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e9ec1efc-f463-445b-b4fd-2760b3c6baa4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.314570388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e9ec1efc-f463-445b-b4fd-2760b3c6baa4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.314804250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752259920851228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d8a800f2a2b2f67d1ea5e05ba4caff1f20a555e2af9dd6eadddc72619ba876,PodSandboxId:1886af1d1e6dcb202dab4ff33f61644a22ee706cd53e1c1cdd936c0b788dc54a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704752233573056054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc706965-4d2e-4bd5-a1c1-0616462e9840,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8a5331,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17,PodSandboxId:4bfe2f5311e83d5fe56a101d85af06bc3658e6014ba7457c593937a6db200d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704752232427113478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fzlzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f48e2d6f-a573-463f-b96e-9f96b3161d66,},Annotations:map[string]string{io.kubernetes.container.hash: 74221cba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420,PodSandboxId:cde50b732d649161d9432de65749fe4aef982535d64a8b6dbec2a514de5aae98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704752230515710640,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f37e50-5c82-4288-8cf8
-cb1c576c7472,},Annotations:map[string]string{io.kubernetes.container.hash: 8f345252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704752229007568201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2
fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434,PodSandboxId:739ce810388b681fba1c9d1c993e89e06fc980d56fa3567bbdc2d1972fc9cb9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704752222645618242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39784c5a6adcc95506cfe25e9403f5d5,},Annotations:map[string]string{io.kube
rnetes.container.hash: be9ca32a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab,PodSandboxId:b709e0e02c865c6ac430c5bc6e4e9d3ce8a60c668ee4357037f895e5960ddba6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704752221051594376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592,PodSandboxId:8405cb736193700954fb2c65b085a476d7242091862e396b26750258a7a86cc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704752220967152669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b45d8726bdb80fb0dada6f51c1b17e,},Annotations:map[string]string{io.kubernetes.container.hash:
6ea9e46e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61,PodSandboxId:13e6538c94ae420201ad07f99e833b351472b7b05f1364053536309d1362a05d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704752220829540553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e9ec1efc-f463-445b-b4fd-2760b3c6baa4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.357540453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=232b2e90-d652-4886-bdc5-f075adec3bc4 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.357626407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=232b2e90-d652-4886-bdc5-f075adec3bc4 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.359342992Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f80840ff-e1ca-4817-9ce5-30abe694b49f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.359847362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753303359824198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=f80840ff-e1ca-4817-9ce5-30abe694b49f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.360829401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d6ea01b-cff8-4ddd-8d5b-e12d4fe15b77 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.360906686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d6ea01b-cff8-4ddd-8d5b-e12d4fe15b77 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:03 old-k8s-version-079759 crio[716]: time="2024-01-08 22:35:03.361225640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752259920851228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66d8a800f2a2b2f67d1ea5e05ba4caff1f20a555e2af9dd6eadddc72619ba876,PodSandboxId:1886af1d1e6dcb202dab4ff33f61644a22ee706cd53e1c1cdd936c0b788dc54a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704752233573056054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc706965-4d2e-4bd5-a1c1-0616462e9840,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8a5331,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17,PodSandboxId:4bfe2f5311e83d5fe56a101d85af06bc3658e6014ba7457c593937a6db200d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704752232427113478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fzlzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f48e2d6f-a573-463f-b96e-9f96b3161d66,},Annotations:map[string]string{io.kubernetes.container.hash: 74221cba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420,PodSandboxId:cde50b732d649161d9432de65749fe4aef982535d64a8b6dbec2a514de5aae98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704752230515710640,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfs65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73f37e50-5c82-4288-8cf8
-cb1c576c7472,},Annotations:map[string]string{io.kubernetes.container.hash: 8f345252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f,PodSandboxId:2e411ea59ae3707f7e2d4fd8dc82ba4e3d4e0c0563a876042a3df5b49ed7e048,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704752229007568201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9c660-a79f-43a4-942c-2
fc4f3c8ff32,},Annotations:map[string]string{io.kubernetes.container.hash: 4bf2ad64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434,PodSandboxId:739ce810388b681fba1c9d1c993e89e06fc980d56fa3567bbdc2d1972fc9cb9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704752222645618242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39784c5a6adcc95506cfe25e9403f5d5,},Annotations:map[string]string{io.kube
rnetes.container.hash: be9ca32a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab,PodSandboxId:b709e0e02c865c6ac430c5bc6e4e9d3ce8a60c668ee4357037f895e5960ddba6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704752221051594376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592,PodSandboxId:8405cb736193700954fb2c65b085a476d7242091862e396b26750258a7a86cc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704752220967152669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38b45d8726bdb80fb0dada6f51c1b17e,},Annotations:map[string]string{io.kubernetes.container.hash:
6ea9e46e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61,PodSandboxId:13e6538c94ae420201ad07f99e833b351472b7b05f1364053536309d1362a05d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704752220829540553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-079759,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d6ea01b-cff8-4ddd-8d5b-e12d4fe15b77 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5e59f9dbead2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       1                   2e411ea59ae37       storage-provisioner
	66d8a800f2a2b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago      Running             busybox                   0                   1886af1d1e6dc       busybox
	f11644eb8c5e5       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      17 minutes ago      Running             coredns                   0                   4bfe2f5311e83       coredns-5644d7b6d9-fzlzx
	d6357e946560e       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      17 minutes ago      Running             kube-proxy                0                   cde50b732d649       kube-proxy-mfs65
	4adf6d6ad1709       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       0                   2e411ea59ae37       storage-provisioner
	37878737b7049       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      18 minutes ago      Running             etcd                      0                   739ce810388b6       etcd-old-k8s-version-079759
	f2a5eecdb0c68       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      18 minutes ago      Running             kube-scheduler            0                   b709e0e02c865       kube-scheduler-old-k8s-version-079759
	26d6552f76c38       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      18 minutes ago      Running             kube-apiserver            0                   8405cb7361937       kube-apiserver-old-k8s-version-079759
	bcbc4b306a60a       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      18 minutes ago      Running             kube-controller-manager   0                   13e6538c94ae4       kube-controller-manager-old-k8s-version-079759
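	(Note, not part of the captured output: the container status table above mirrors what crictl ps -a reports inside the guest. A minimal sketch of reproducing it by hand, assuming the profile name old-k8s-version-079759 from the journal header and that crictl is on the guest PATH:

	    # illustrative sketch only
	    minikube ssh -p old-k8s-version-079759 -- sudo crictl ps -a)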
	
	
	==> coredns [f11644eb8c5e507c66727193d418ed1049e5835c96940bcea8d622d5cd247c17] <==
	E0108 22:07:57.615159       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=478&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639459       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=482&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:57.639698       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=146&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0108 22:07:15.456904       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	2024-01-08T22:07:23.769Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	2024-01-08T22:07:23.800Z [INFO] 127.0.0.1:42162 - 57998 "HINFO IN 7314273592572006048.1780055050944407881. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030508662s
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-08T22:17:12.800Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-08T22:17:12.800Z [INFO] CoreDNS-1.6.2
	2024-01-08T22:17:12.800Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-08T22:17:12.845Z [INFO] 127.0.0.1:56141 - 45734 "HINFO IN 6412304003339905310.5487353666919062536. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044314845s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-079759
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-079759
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=old-k8s-version-079759
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_06_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:06:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:34:39 +0000   Mon, 08 Jan 2024 22:06:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:34:39 +0000   Mon, 08 Jan 2024 22:06:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:34:39 +0000   Mon, 08 Jan 2024 22:06:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:34:39 +0000   Mon, 08 Jan 2024 22:17:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    old-k8s-version-079759
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 a54b7c7cd22d472991831b6fcc8e5a4e
	 System UUID:                a54b7c7c-d22d-4729-9183-1b6fcc8e5a4e
	 Boot ID:                    0790ceb3-d2f6-4d4f-b3d6-8760fffda9df
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                coredns-5644d7b6d9-fzlzx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                etcd-old-k8s-version-079759                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-apiserver-old-k8s-version-079759             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-controller-manager-old-k8s-version-079759    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                kube-proxy-mfs65                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-scheduler-old-k8s-version-079759             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                metrics-server-74d5856cc6-sdlnw                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kube-proxy, old-k8s-version-079759  Starting kube-proxy.
	  Normal  Starting                 18m                kubelet, old-k8s-version-079759     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x7 over 18m)  kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet, old-k8s-version-079759     Node old-k8s-version-079759 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet, old-k8s-version-079759     Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kube-proxy, old-k8s-version-079759  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 8 22:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077820] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.134270] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.734516] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.177995] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.755367] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.913808] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.130350] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.183623] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.132893] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.281141] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +20.233746] systemd-fstab-generator[1028]: Ignoring "noauto" for root device
	[  +0.519162] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 8 22:17] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [37878737b7049c81bbdabe18c753751cc81ee596cf8b23ce0de3a55e10c7a434] <==
	2024-01-08 22:17:02.759016 I | etcdserver: restarting member f87838631c8138de in cluster 2dc4003dc2fbf749 at commit index 520
	2024-01-08 22:17:02.759375 I | raft: f87838631c8138de became follower at term 2
	2024-01-08 22:17:02.759490 I | raft: newRaft f87838631c8138de [peers: [], term: 2, commit: 520, applied: 0, lastindex: 520, lastterm: 2]
	2024-01-08 22:17:02.776353 W | auth: simple token is not cryptographically signed
	2024-01-08 22:17:02.780321 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-08 22:17:02.782618 I | etcdserver/membership: added member f87838631c8138de [https://192.168.39.183:2380] to cluster 2dc4003dc2fbf749
	2024-01-08 22:17:02.782712 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 22:17:02.783113 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-08 22:17:02.783238 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-08 22:17:02.783522 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-08 22:17:02.783802 I | embed: listening for metrics on http://192.168.39.183:2381
	2024-01-08 22:17:04.560584 I | raft: f87838631c8138de is starting a new election at term 2
	2024-01-08 22:17:04.560622 I | raft: f87838631c8138de became candidate at term 3
	2024-01-08 22:17:04.560638 I | raft: f87838631c8138de received MsgVoteResp from f87838631c8138de at term 3
	2024-01-08 22:17:04.560650 I | raft: f87838631c8138de became leader at term 3
	2024-01-08 22:17:04.560656 I | raft: raft.node: f87838631c8138de elected leader f87838631c8138de at term 3
	2024-01-08 22:17:04.563698 I | embed: ready to serve client requests
	2024-01-08 22:17:04.564468 I | etcdserver: published {Name:old-k8s-version-079759 ClientURLs:[https://192.168.39.183:2379]} to cluster 2dc4003dc2fbf749
	2024-01-08 22:17:04.564640 I | embed: ready to serve client requests
	2024-01-08 22:17:04.565648 I | embed: serving client requests on 192.168.39.183:2379
	2024-01-08 22:17:04.566107 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-08 22:27:04.594734 I | mvcc: store.index: compact 803
	2024-01-08 22:27:04.598274 I | mvcc: finished scheduled compaction at 803 (took 2.439479ms)
	2024-01-08 22:32:04.606495 I | mvcc: store.index: compact 1022
	2024-01-08 22:32:04.608790 I | mvcc: finished scheduled compaction at 1022 (took 1.200374ms)
	
	
	==> kernel <==
	 22:35:03 up 18 min,  0 users,  load average: 0.17, 0.24, 0.19
	Linux old-k8s-version-079759 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [26d6552f76c38aef19a49c439d43bbf7af399334103add4c291eeb5f82981592] <==
	I0108 22:27:09.211397       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:27:09.211892       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:27:09.212157       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:27:09.212206       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:28:09.212531       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:28:09.212768       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:28:09.212873       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:28:09.212913       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:30:09.213656       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:30:09.213781       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:30:09.214004       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:30:09.214071       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:32:09.216352       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:32:09.216518       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:32:09.216603       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:32:09.216614       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:33:09.217296       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 22:33:09.217548       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 22:33:09.217741       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:33:09.217755       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bcbc4b306a60a9e16aba58f19f7a5d06ad14ef8b9c077309d7b1868eed1bdb61] <==
	E0108 22:28:32.029222       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:28:40.603012       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:29:02.282311       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:29:12.605035       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:29:32.538900       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:29:44.607679       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:30:02.791626       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:30:16.610330       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:30:33.044435       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:30:48.613220       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:31:03.297094       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:31:20.615759       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:31:33.549885       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:31:52.618169       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:32:03.802730       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:32:24.620290       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:32:34.055284       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:32:56.622466       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:33:04.308119       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:33:28.625521       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:33:34.560436       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:34:00.628658       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:34:04.813214       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 22:34:32.631785       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 22:34:35.065338       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [d6357e946560eb5e38c34ed44acdcd8e02fe60ca6cb8a6da0cc432fb83185420] <==
	W0108 22:06:46.668750       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 22:06:46.684160       1 node.go:135] Successfully retrieved node IP: 192.168.39.183
	I0108 22:06:46.684559       1 server_others.go:149] Using iptables Proxier.
	I0108 22:06:46.685373       1 server.go:529] Version: v1.16.0
	I0108 22:06:46.691810       1 config.go:313] Starting service config controller
	I0108 22:06:46.691888       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 22:06:46.691920       1 config.go:131] Starting endpoints config controller
	I0108 22:06:46.691955       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 22:06:46.797198       1 shared_informer.go:204] Caches are synced for service config 
	I0108 22:06:46.797340       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0108 22:17:10.730439       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 22:17:10.743302       1 node.go:135] Successfully retrieved node IP: 192.168.39.183
	I0108 22:17:10.743368       1 server_others.go:149] Using iptables Proxier.
	I0108 22:17:10.744060       1 server.go:529] Version: v1.16.0
	I0108 22:17:10.745785       1 config.go:131] Starting endpoints config controller
	I0108 22:17:10.745849       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 22:17:10.746175       1 config.go:313] Starting service config controller
	I0108 22:17:10.746222       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 22:17:10.846841       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0108 22:17:10.849109       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [f2a5eecdb0c68151594227c89e049e402f558d9d9cffef358259419645a9e8ab] <==
	E0108 22:06:23.068050       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:06:23.074906       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:06:24.065000       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:06:24.066937       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:06:24.069589       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:06:24.069684       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:06:24.071043       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:06:24.072070       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:06:24.073780       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:06:24.074768       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:06:24.076525       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:06:24.080024       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:06:24.080662       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:06:42.931136       1 factory.go:585] pod is already present in the activeQ
	I0108 22:17:02.255631       1 serving.go:319] Generated self-signed cert in-memory
	W0108 22:17:08.134473       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 22:17:08.134696       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:17:08.134729       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 22:17:08.134830       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 22:17:08.143399       1 server.go:143] Version: v1.16.0
	I0108 22:17:08.147227       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0108 22:17:08.156754       1 authorization.go:47] Authorization is disabled
	W0108 22:17:08.157027       1 authentication.go:79] Authentication is disabled
	I0108 22:17:08.161064       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0108 22:17:08.171336       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:16:28 UTC, ends at Mon 2024-01-08 22:35:04 UTC. --
	Jan 08 22:30:26 old-k8s-version-079759 kubelet[1034]: E0108 22:30:26.681513    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:30:37 old-k8s-version-079759 kubelet[1034]: E0108 22:30:37.680611    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:30:51 old-k8s-version-079759 kubelet[1034]: E0108 22:30:51.682370    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:31:03 old-k8s-version-079759 kubelet[1034]: E0108 22:31:03.680986    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:31:16 old-k8s-version-079759 kubelet[1034]: E0108 22:31:16.681912    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:31:30 old-k8s-version-079759 kubelet[1034]: E0108 22:31:30.681287    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:31:45 old-k8s-version-079759 kubelet[1034]: E0108 22:31:45.681102    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:31:56 old-k8s-version-079759 kubelet[1034]: E0108 22:31:56.681591    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:31:59 old-k8s-version-079759 kubelet[1034]: E0108 22:31:59.762371    1034 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 08 22:32:11 old-k8s-version-079759 kubelet[1034]: E0108 22:32:11.681154    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:32:23 old-k8s-version-079759 kubelet[1034]: E0108 22:32:23.681465    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:32:35 old-k8s-version-079759 kubelet[1034]: E0108 22:32:35.681251    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:32:49 old-k8s-version-079759 kubelet[1034]: E0108 22:32:49.682873    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:33:04 old-k8s-version-079759 kubelet[1034]: E0108 22:33:04.680824    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:33:19 old-k8s-version-079759 kubelet[1034]: E0108 22:33:19.697061    1034 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 22:33:19 old-k8s-version-079759 kubelet[1034]: E0108 22:33:19.697155    1034 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 22:33:19 old-k8s-version-079759 kubelet[1034]: E0108 22:33:19.697220    1034 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 22:33:19 old-k8s-version-079759 kubelet[1034]: E0108 22:33:19.697274    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 08 22:33:32 old-k8s-version-079759 kubelet[1034]: E0108 22:33:32.681232    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:33:45 old-k8s-version-079759 kubelet[1034]: E0108 22:33:45.681838    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:33:59 old-k8s-version-079759 kubelet[1034]: E0108 22:33:59.680749    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:34:11 old-k8s-version-079759 kubelet[1034]: E0108 22:34:11.682650    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:34:25 old-k8s-version-079759 kubelet[1034]: E0108 22:34:25.681481    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:34:39 old-k8s-version-079759 kubelet[1034]: E0108 22:34:39.681903    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 22:34:52 old-k8s-version-079759 kubelet[1034]: E0108 22:34:52.680761    1034 pod_workers.go:191] Error syncing pod 2e600533-85b2-48aa-8f05-38ae2bb96122 ("metrics-server-74d5856cc6-sdlnw_kube-system(2e600533-85b2-48aa-8f05-38ae2bb96122)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [4adf6d6ad1709ddb11defd12bd38fb53be3ffd0014829ae23df12773621c2a7f] <==
	I0108 22:06:46.639039       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 22:07:16.642423       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	I0108 22:17:09.165520       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 22:17:39.175222       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5e59f9dbead2fb1101091647e756686b7c0704e2cbeb41248f0e37ac07c14ad9] <==
	I0108 22:07:17.094878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:07:17.114141       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:07:17.114405       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:07:17.133989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:07:17.135022       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48697b63-5676-4f6a-8f67-c0b173c18024", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-079759_5ab639fe-eef1-4024-8927-a3fde7e1b1d8 became leader
	I0108 22:07:17.136536       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-079759_5ab639fe-eef1-4024-8927-a3fde7e1b1d8!
	I0108 22:07:17.238017       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-079759_5ab639fe-eef1-4024-8927-a3fde7e1b1d8!
	I0108 22:17:40.069803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:17:40.083344       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:17:40.083419       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:17:57.491284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:17:57.492309       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48697b63-5676-4f6a-8f67-c0b173c18024", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-079759_3d1860c1-539c-4eb4-b1e7-aacf51850f57 became leader
	I0108 22:17:57.492430       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-079759_3d1860c1-539c-4eb4-b1e7-aacf51850f57!
	I0108 22:17:57.593228       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-079759_3d1860c1-539c-4eb4-b1e7-aacf51850f57!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079759 -n old-k8s-version-079759
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-079759 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-sdlnw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-079759 describe pod metrics-server-74d5856cc6-sdlnw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-079759 describe pod metrics-server-74d5856cc6-sdlnw: exit status 1 (84.890231ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-sdlnw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-079759 describe pod metrics-server-74d5856cc6-sdlnw: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (520.53s)
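For reference, the post-mortem checks above can be replayed by hand against the same profile. The snippet below is a minimal sketch of that manual equivalent; the context, namespace and pod names are copied verbatim from the log, and shell quoting is added for interactive use:

	# list pods that are not in the Running phase, as helpers_test.go does
	kubectl --context old-k8s-version-079759 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' --field-selector='status.phase!=Running'

	# describe the pod the helper reported as non-running
	kubectl --context old-k8s-version-079759 describe pod metrics-server-74d5856cc6-sdlnw

In this run the metrics-server pod had already been removed by the time the helper re-queried it, which is why the describe call exits with NotFound.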

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 22:27:44.574683  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 22:29:07.622443  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 22:29:44.964619  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 22:29:56.854141  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-08 22:35:28.761934874 +0000 UTC m=+5604.463057225
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
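A manual equivalent of the check the test is polling for, sketched with the profile name, namespace and label selector shown in the log above, would be:

	kubectl --context default-k8s-diff-port-292054 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard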
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-292054 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-292054 logs -n 25: (1.562169406s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-523607                              | cert-expiration-523607       | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343954 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | disable-driver-mounts-343954                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:09 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079759        | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC | 08 Jan 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-675668             | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-903819            | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-292054  | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC | 08 Jan 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC |                     |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079759             | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC | 08 Jan 24 22:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-675668                  | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-903819                 | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-292054       | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:26 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:35 UTC |
	| start   | -p newest-cni-154365 --memory=2200 --alsologtostderr   | newest-cni-154365            | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
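	Reassembled from the last row of the table above, the newest-cni-154365 start is a single invocation (flags copied verbatim from the table; the binary path matches the one used elsewhere in this report, and the backslash line breaks are added only for readability):
	
	  out/minikube-linux-amd64 start -p newest-cni-154365 --memory=2200 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa \
	    --feature-gates ServerSideApply=true \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.29.0-rc.2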
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:35:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
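	For example, the first entry below, "I0108 22:35:06.501355  380658 out.go:296]", decodes as: Info severity (I), date 01/08, time 22:35:06.501355, process/thread id 380658, and the emitting source location out.go:296, followed by the message.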
	I0108 22:35:06.501355  380658 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:35:06.501575  380658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:35:06.501586  380658 out.go:309] Setting ErrFile to fd 2...
	I0108 22:35:06.501591  380658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:35:06.501828  380658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:35:06.502674  380658 out.go:303] Setting JSON to false
	I0108 22:35:06.504126  380658 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11833,"bootTime":1704741474,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:35:06.504231  380658 start.go:138] virtualization: kvm guest
	I0108 22:35:06.507473  380658 out.go:177] * [newest-cni-154365] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:35:06.509882  380658 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:35:06.509781  380658 notify.go:220] Checking for updates...
	I0108 22:35:06.512039  380658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:35:06.513740  380658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:35:06.515232  380658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:35:06.516645  380658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:35:06.518041  380658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:35:06.520059  380658 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:35:06.520238  380658 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:35:06.520359  380658 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:35:06.520600  380658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:35:06.564846  380658 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 22:35:06.566955  380658 start.go:298] selected driver: kvm2
	I0108 22:35:06.566992  380658 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:35:06.567011  380658 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:35:06.568143  380658 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:35:06.568274  380658 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:35:06.585867  380658 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:35:06.585922  380658 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0108 22:35:06.585949  380658 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0108 22:35:06.586228  380658 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0108 22:35:06.586384  380658 cni.go:84] Creating CNI manager for ""
	I0108 22:35:06.586399  380658 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:35:06.586413  380658 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 22:35:06.586420  380658 start_flags.go:321] config:
	{Name:newest-cni-154365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-154365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:35:06.586657  380658 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:35:06.589434  380658 out.go:177] * Starting control plane node newest-cni-154365 in cluster newest-cni-154365
	I0108 22:35:06.591174  380658 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:35:06.591261  380658 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 22:35:06.591296  380658 cache.go:56] Caching tarball of preloaded images
	I0108 22:35:06.591501  380658 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:35:06.591544  380658 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0108 22:35:06.591688  380658 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/config.json ...
	I0108 22:35:06.591713  380658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/config.json: {Name:mk7a00387e7d74badc28ce1e19e14d16de8ddd24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:35:06.592026  380658 start.go:365] acquiring machines lock for newest-cni-154365: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:35:06.592076  380658 start.go:369] acquired machines lock for "newest-cni-154365" in 28.092µs
	I0108 22:35:06.592115  380658 start.go:93] Provisioning new machine with config: &{Name:newest-cni-154365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-154365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:35:06.592567  380658 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 22:35:06.595559  380658 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 22:35:06.595850  380658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:35:06.595941  380658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:35:06.613173  380658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35741
	I0108 22:35:06.613670  380658 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:35:06.614306  380658 main.go:141] libmachine: Using API Version  1
	I0108 22:35:06.614331  380658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:35:06.614721  380658 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:35:06.614938  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetMachineName
	I0108 22:35:06.615128  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:35:06.615351  380658 start.go:159] libmachine.API.Create for "newest-cni-154365" (driver="kvm2")
	I0108 22:35:06.615411  380658 client.go:168] LocalClient.Create starting
	I0108 22:35:06.615457  380658 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem
	I0108 22:35:06.615503  380658 main.go:141] libmachine: Decoding PEM data...
	I0108 22:35:06.615520  380658 main.go:141] libmachine: Parsing certificate...
	I0108 22:35:06.615588  380658 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem
	I0108 22:35:06.615608  380658 main.go:141] libmachine: Decoding PEM data...
	I0108 22:35:06.615622  380658 main.go:141] libmachine: Parsing certificate...
	I0108 22:35:06.615638  380658 main.go:141] libmachine: Running pre-create checks...
	I0108 22:35:06.615650  380658 main.go:141] libmachine: (newest-cni-154365) Calling .PreCreateCheck
	I0108 22:35:06.616003  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetConfigRaw
	I0108 22:35:06.616472  380658 main.go:141] libmachine: Creating machine...
	I0108 22:35:06.616489  380658 main.go:141] libmachine: (newest-cni-154365) Calling .Create
	I0108 22:35:06.616654  380658 main.go:141] libmachine: (newest-cni-154365) Creating KVM machine...
	I0108 22:35:06.618114  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found existing default KVM network
	I0108 22:35:06.620366  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:06.620172  380682 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025e060}
	I0108 22:35:06.626896  380658 main.go:141] libmachine: (newest-cni-154365) DBG | trying to create private KVM network mk-newest-cni-154365 192.168.39.0/24...
	I0108 22:35:06.723349  380658 main.go:141] libmachine: (newest-cni-154365) DBG | private KVM network mk-newest-cni-154365 192.168.39.0/24 created
	I0108 22:35:06.723421  380658 main.go:141] libmachine: (newest-cni-154365) Setting up store path in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365 ...
	I0108 22:35:06.723444  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:06.723325  380682 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:35:06.723459  380658 main.go:141] libmachine: (newest-cni-154365) Building disk image from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 22:35:06.723634  380658 main.go:141] libmachine: (newest-cni-154365) Downloading /home/jenkins/minikube-integration/17866-334768/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 22:35:06.982242  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:06.982083  380682 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa...
	I0108 22:35:07.057649  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:07.057492  380682 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/newest-cni-154365.rawdisk...
	I0108 22:35:07.057701  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Writing magic tar header
	I0108 22:35:07.057721  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Writing SSH key tar header
	I0108 22:35:07.057734  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:07.057691  380682 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365 ...
	I0108 22:35:07.057906  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365 (perms=drwx------)
	I0108 22:35:07.057988  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365
	I0108 22:35:07.058006  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines (perms=drwxr-xr-x)
	I0108 22:35:07.058022  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube (perms=drwxr-xr-x)
	I0108 22:35:07.058033  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768 (perms=drwxrwxr-x)
	I0108 22:35:07.058044  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 22:35:07.058053  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 22:35:07.058064  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines
	I0108 22:35:07.058080  380658 main.go:141] libmachine: (newest-cni-154365) Creating domain...
	I0108 22:35:07.058092  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:35:07.058114  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768
	I0108 22:35:07.058126  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 22:35:07.058149  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins
	I0108 22:35:07.058180  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home
	I0108 22:35:07.058223  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Skipping /home - not owner
	I0108 22:35:07.059697  380658 main.go:141] libmachine: (newest-cni-154365) define libvirt domain using xml: 
	I0108 22:35:07.059723  380658 main.go:141] libmachine: (newest-cni-154365) <domain type='kvm'>
	I0108 22:35:07.059734  380658 main.go:141] libmachine: (newest-cni-154365)   <name>newest-cni-154365</name>
	I0108 22:35:07.059750  380658 main.go:141] libmachine: (newest-cni-154365)   <memory unit='MiB'>2200</memory>
	I0108 22:35:07.059765  380658 main.go:141] libmachine: (newest-cni-154365)   <vcpu>2</vcpu>
	I0108 22:35:07.059786  380658 main.go:141] libmachine: (newest-cni-154365)   <features>
	I0108 22:35:07.059801  380658 main.go:141] libmachine: (newest-cni-154365)     <acpi/>
	I0108 22:35:07.059812  380658 main.go:141] libmachine: (newest-cni-154365)     <apic/>
	I0108 22:35:07.059852  380658 main.go:141] libmachine: (newest-cni-154365)     <pae/>
	I0108 22:35:07.059890  380658 main.go:141] libmachine: (newest-cni-154365)     
	I0108 22:35:07.059910  380658 main.go:141] libmachine: (newest-cni-154365)   </features>
	I0108 22:35:07.059924  380658 main.go:141] libmachine: (newest-cni-154365)   <cpu mode='host-passthrough'>
	I0108 22:35:07.059938  380658 main.go:141] libmachine: (newest-cni-154365)   
	I0108 22:35:07.059951  380658 main.go:141] libmachine: (newest-cni-154365)   </cpu>
	I0108 22:35:07.059966  380658 main.go:141] libmachine: (newest-cni-154365)   <os>
	I0108 22:35:07.059987  380658 main.go:141] libmachine: (newest-cni-154365)     <type>hvm</type>
	I0108 22:35:07.060002  380658 main.go:141] libmachine: (newest-cni-154365)     <boot dev='cdrom'/>
	I0108 22:35:07.060011  380658 main.go:141] libmachine: (newest-cni-154365)     <boot dev='hd'/>
	I0108 22:35:07.060022  380658 main.go:141] libmachine: (newest-cni-154365)     <bootmenu enable='no'/>
	I0108 22:35:07.060035  380658 main.go:141] libmachine: (newest-cni-154365)   </os>
	I0108 22:35:07.060055  380658 main.go:141] libmachine: (newest-cni-154365)   <devices>
	I0108 22:35:07.060078  380658 main.go:141] libmachine: (newest-cni-154365)     <disk type='file' device='cdrom'>
	I0108 22:35:07.060095  380658 main.go:141] libmachine: (newest-cni-154365)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/boot2docker.iso'/>
	I0108 22:35:07.060119  380658 main.go:141] libmachine: (newest-cni-154365)       <target dev='hdc' bus='scsi'/>
	I0108 22:35:07.060130  380658 main.go:141] libmachine: (newest-cni-154365)       <readonly/>
	I0108 22:35:07.060146  380658 main.go:141] libmachine: (newest-cni-154365)     </disk>
	I0108 22:35:07.060161  380658 main.go:141] libmachine: (newest-cni-154365)     <disk type='file' device='disk'>
	I0108 22:35:07.060176  380658 main.go:141] libmachine: (newest-cni-154365)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 22:35:07.060217  380658 main.go:141] libmachine: (newest-cni-154365)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/newest-cni-154365.rawdisk'/>
	I0108 22:35:07.060240  380658 main.go:141] libmachine: (newest-cni-154365)       <target dev='hda' bus='virtio'/>
	I0108 22:35:07.060251  380658 main.go:141] libmachine: (newest-cni-154365)     </disk>
	I0108 22:35:07.060256  380658 main.go:141] libmachine: (newest-cni-154365)     <interface type='network'>
	I0108 22:35:07.060292  380658 main.go:141] libmachine: (newest-cni-154365)       <source network='mk-newest-cni-154365'/>
	I0108 22:35:07.060315  380658 main.go:141] libmachine: (newest-cni-154365)       <model type='virtio'/>
	I0108 22:35:07.060329  380658 main.go:141] libmachine: (newest-cni-154365)     </interface>
	I0108 22:35:07.060339  380658 main.go:141] libmachine: (newest-cni-154365)     <interface type='network'>
	I0108 22:35:07.060347  380658 main.go:141] libmachine: (newest-cni-154365)       <source network='default'/>
	I0108 22:35:07.060359  380658 main.go:141] libmachine: (newest-cni-154365)       <model type='virtio'/>
	I0108 22:35:07.060374  380658 main.go:141] libmachine: (newest-cni-154365)     </interface>
	I0108 22:35:07.060390  380658 main.go:141] libmachine: (newest-cni-154365)     <serial type='pty'>
	I0108 22:35:07.060400  380658 main.go:141] libmachine: (newest-cni-154365)       <target port='0'/>
	I0108 22:35:07.060409  380658 main.go:141] libmachine: (newest-cni-154365)     </serial>
	I0108 22:35:07.060422  380658 main.go:141] libmachine: (newest-cni-154365)     <console type='pty'>
	I0108 22:35:07.060430  380658 main.go:141] libmachine: (newest-cni-154365)       <target type='serial' port='0'/>
	I0108 22:35:07.060465  380658 main.go:141] libmachine: (newest-cni-154365)     </console>
	I0108 22:35:07.060489  380658 main.go:141] libmachine: (newest-cni-154365)     <rng model='virtio'>
	I0108 22:35:07.060506  380658 main.go:141] libmachine: (newest-cni-154365)       <backend model='random'>/dev/random</backend>
	I0108 22:35:07.060517  380658 main.go:141] libmachine: (newest-cni-154365)     </rng>
	I0108 22:35:07.060536  380658 main.go:141] libmachine: (newest-cni-154365)     
	I0108 22:35:07.060545  380658 main.go:141] libmachine: (newest-cni-154365)     
	I0108 22:35:07.060563  380658 main.go:141] libmachine: (newest-cni-154365)   </devices>
	I0108 22:35:07.060577  380658 main.go:141] libmachine: (newest-cni-154365) </domain>
	I0108 22:35:07.060590  380658 main.go:141] libmachine: (newest-cni-154365) 
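	For readability, the libvirt domain definition that libmachine logs line by line above reassembles to the following XML (element names and file paths are copied verbatim from the log lines; the empty log lines carry no XML content and are omitted):
	
	  <domain type='kvm'>
	    <name>newest-cni-154365</name>
	    <memory unit='MiB'>2200</memory>
	    <vcpu>2</vcpu>
	    <features>
	      <acpi/>
	      <apic/>
	      <pae/>
	    </features>
	    <cpu mode='host-passthrough'>
	    </cpu>
	    <os>
	      <type>hvm</type>
	      <boot dev='cdrom'/>
	      <boot dev='hd'/>
	      <bootmenu enable='no'/>
	    </os>
	    <devices>
	      <disk type='file' device='cdrom'>
	        <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/boot2docker.iso'/>
	        <target dev='hdc' bus='scsi'/>
	        <readonly/>
	      </disk>
	      <disk type='file' device='disk'>
	        <driver name='qemu' type='raw' cache='default' io='threads' />
	        <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/newest-cni-154365.rawdisk'/>
	        <target dev='hda' bus='virtio'/>
	      </disk>
	      <interface type='network'>
	        <source network='mk-newest-cni-154365'/>
	        <model type='virtio'/>
	      </interface>
	      <interface type='network'>
	        <source network='default'/>
	        <model type='virtio'/>
	      </interface>
	      <serial type='pty'>
	        <target port='0'/>
	      </serial>
	      <console type='pty'>
	        <target type='serial' port='0'/>
	      </console>
	      <rng model='virtio'>
	        <backend model='random'>/dev/random</backend>
	      </rng>
	    </devices>
	  </domain>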
	I0108 22:35:07.065658  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:c3:e1:8b in network default
	I0108 22:35:07.066342  380658 main.go:141] libmachine: (newest-cni-154365) Ensuring networks are active...
	I0108 22:35:07.066367  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:07.067042  380658 main.go:141] libmachine: (newest-cni-154365) Ensuring network default is active
	I0108 22:35:07.067436  380658 main.go:141] libmachine: (newest-cni-154365) Ensuring network mk-newest-cni-154365 is active
	I0108 22:35:07.068066  380658 main.go:141] libmachine: (newest-cni-154365) Getting domain xml...
	I0108 22:35:07.068846  380658 main.go:141] libmachine: (newest-cni-154365) Creating domain...
	I0108 22:35:08.506659  380658 main.go:141] libmachine: (newest-cni-154365) Waiting to get IP...
	I0108 22:35:08.507673  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:08.508205  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:08.508312  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:08.508228  380682 retry.go:31] will retry after 238.401301ms: waiting for machine to come up
	I0108 22:35:08.749140  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:08.749796  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:08.749822  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:08.749742  380682 retry.go:31] will retry after 309.542396ms: waiting for machine to come up
	I0108 22:35:09.061535  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:09.062125  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:09.062163  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:09.062075  380682 retry.go:31] will retry after 393.893029ms: waiting for machine to come up
	I0108 22:35:09.457677  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:09.458303  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:09.458334  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:09.458254  380682 retry.go:31] will retry after 425.719934ms: waiting for machine to come up
	I0108 22:35:09.885555  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:09.885974  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:09.886000  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:09.885933  380682 retry.go:31] will retry after 483.756468ms: waiting for machine to come up
	I0108 22:35:10.371798  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:10.372301  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:10.372331  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:10.372259  380682 retry.go:31] will retry after 910.498928ms: waiting for machine to come up
	I0108 22:35:11.284344  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:11.284957  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:11.284994  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:11.284899  380682 retry.go:31] will retry after 1.093353625s: waiting for machine to come up
	I0108 22:35:12.380043  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:12.380759  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:12.380799  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:12.380670  380682 retry.go:31] will retry after 1.460216822s: waiting for machine to come up
	I0108 22:35:13.842429  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:13.842995  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:13.843030  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:13.842952  380682 retry.go:31] will retry after 1.430170501s: waiting for machine to come up
	I0108 22:35:15.275789  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:15.276323  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:15.276362  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:15.276277  380682 retry.go:31] will retry after 1.621041797s: waiting for machine to come up
	I0108 22:35:16.899140  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:16.899761  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:16.899791  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:16.899708  380682 retry.go:31] will retry after 2.701894127s: waiting for machine to come up
	I0108 22:35:19.605036  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:19.605624  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:19.605652  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:19.605586  380682 retry.go:31] will retry after 3.62067067s: waiting for machine to come up
	I0108 22:35:23.227405  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:23.227879  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:23.227924  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:23.227827  380682 retry.go:31] will retry after 3.172675173s: waiting for machine to come up
	I0108 22:35:26.402974  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:26.403451  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:26.403490  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:26.403345  380682 retry.go:31] will retry after 5.398315404s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:16:05 UTC, ends at Mon 2024-01-08 22:35:29 UTC. --
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.665508235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cc4d402f-7c1b-416f-8924-113a26d28705 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.667263770Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c8098577-11b4-407a-9cbe-e3ca31a3326d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.667850940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753329667836147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c8098577-11b4-407a-9cbe-e3ca31a3326d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.668354514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=07498b49-2bff-4e1d-8d58-2bb8c44c438a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.668557672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=07498b49-2bff-4e1d-8d58-2bb8c44c438a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.668734618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c,PodSandboxId:6b51dd8a2a2b8892e8acd42cd11153d2611c9bd4d40129f425ad4c7268fc012e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752534111353778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c2430d-d84e-415e-83b3-c32e7635fe74,},Annotations:map[string]string{io.kubernetes.container.hash: c3c57d92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6,PodSandboxId:11be2fc68090634a321ee4d5fc0afda5b6cf95356d717aa1bbf03d5ae84f037a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752533540374408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwmkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2,},Annotations:map[string]string{io.kubernetes.container.hash: 69bd94d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570,PodSandboxId:cf4667045a70dd9fe5220f2a651d08f8533a1e47f4840bf7b548433457e4a4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752532144310365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r27zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82dae88-118a-4e13-a714-1240d48dfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8b267c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf,PodSandboxId:3f1a8cb24bd1c6b2abf2c5ae299e0c81d8e7da1f444160a5901faeb870f47873,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752507093529577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e73bf885258e1ba6
2654850d16abea3,},Annotations:map[string]string{io.kubernetes.container.hash: af30e0f5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4,PodSandboxId:191b85966782541646083985440233483d4f699c96b098ee64a38dcdc07c30de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752507211657648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b97eb78da9d1b4f
d8649df06c7ca7c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d,PodSandboxId:98f3c0e3a1bad8e806448c7c2e20618caa5f4e39ef02f80c5969d9f12ec14cd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752507065623484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 27ee4b9df4c37f95e2011b8bd21f25a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348,PodSandboxId:cefcbd6c3f309345f2b99af94d9273982329c934ec3c1a7438d0e797f01a6db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752506813371814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e3295cbb0d1303870eed006ab815b2a8,},Annotations:map[string]string{io.kubernetes.container.hash: c286a60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=07498b49-2bff-4e1d-8d58-2bb8c44c438a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.720322887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c681c215-fb48-4a20-acc7-ad53ed40ab06 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.720579380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c681c215-fb48-4a20-acc7-ad53ed40ab06 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.722182601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5295eff7-9c33-44d2-9b6d-bdbd0c6637d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.722762759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753329722744221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5295eff7-9c33-44d2-9b6d-bdbd0c6637d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.723648291Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2e5345ab-c145-4b14-9fba-359026841fe0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.723964835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2e5345ab-c145-4b14-9fba-359026841fe0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.724343174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c,PodSandboxId:6b51dd8a2a2b8892e8acd42cd11153d2611c9bd4d40129f425ad4c7268fc012e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752534111353778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c2430d-d84e-415e-83b3-c32e7635fe74,},Annotations:map[string]string{io.kubernetes.container.hash: c3c57d92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6,PodSandboxId:11be2fc68090634a321ee4d5fc0afda5b6cf95356d717aa1bbf03d5ae84f037a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752533540374408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwmkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2,},Annotations:map[string]string{io.kubernetes.container.hash: 69bd94d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570,PodSandboxId:cf4667045a70dd9fe5220f2a651d08f8533a1e47f4840bf7b548433457e4a4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752532144310365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r27zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82dae88-118a-4e13-a714-1240d48dfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8b267c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf,PodSandboxId:3f1a8cb24bd1c6b2abf2c5ae299e0c81d8e7da1f444160a5901faeb870f47873,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752507093529577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e73bf885258e1ba6
2654850d16abea3,},Annotations:map[string]string{io.kubernetes.container.hash: af30e0f5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4,PodSandboxId:191b85966782541646083985440233483d4f699c96b098ee64a38dcdc07c30de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752507211657648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b97eb78da9d1b4f
d8649df06c7ca7c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d,PodSandboxId:98f3c0e3a1bad8e806448c7c2e20618caa5f4e39ef02f80c5969d9f12ec14cd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752507065623484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 27ee4b9df4c37f95e2011b8bd21f25a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348,PodSandboxId:cefcbd6c3f309345f2b99af94d9273982329c934ec3c1a7438d0e797f01a6db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752506813371814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e3295cbb0d1303870eed006ab815b2a8,},Annotations:map[string]string{io.kubernetes.container.hash: c286a60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2e5345ab-c145-4b14-9fba-359026841fe0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.766505019Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=2af1ecd1-038e-408e-a3a1-dda8e8ca8c27 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.766857323Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6b51dd8a2a2b8892e8acd42cd11153d2611c9bd4d40129f425ad4c7268fc012e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:05c2430d-d84e-415e-83b3-c32e7635fe74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752532900895306,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c2430d-d84e-415e-83b3-c32e7635fe74,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-08T22:22:12.566040993Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c5f88cccb1346a6df7e5cfb9443773baf5ca1be7f0fa0d0765bdc4f044af87a,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-jm9lg,Uid:b94afab5-f573-4ed1-bc29-64eb8e90c574,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752532632279706,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-jm9lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b94afab5-f573-4ed1-bc29-6
4eb8e90c574,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T22:22:12.277357829Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf4667045a70dd9fe5220f2a651d08f8533a1e47f4840bf7b548433457e4a4ca,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-r27zw,Uid:c82dae88-118a-4e13-a714-1240d48dfc4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752529455862567,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-r27zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82dae88-118a-4e13-a714-1240d48dfc4e,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T22:22:09.101866660Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:11be2fc68090634a321ee4d5fc0afda5b6cf95356d717aa1bbf03d5ae84f037a,Metadata:&PodSandboxMetadata{Name:kube-proxy-bwmkb,Uid:c01f0fed-4a5f-46
7e-a4c0-8d4f2bdb12a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752529194215608,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bwmkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T22:22:08.845901131Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:191b85966782541646083985440233483d4f699c96b098ee64a38dcdc07c30de,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-292054,Uid:25b97eb78da9d1b4fd8649df06c7ca7c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752506086621427,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 25b97eb78da9d1b4fd8649df06c7ca7c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 25b97eb78da9d1b4fd8649df06c7ca7c,kubernetes.io/config.seen: 2024-01-08T22:21:45.473222043Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:98f3c0e3a1bad8e806448c7c2e20618caa5f4e39ef02f80c5969d9f12ec14cd1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-292054,Uid:27ee4b9df4c37f95e2011b8bd21f25a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752506066020106,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ee4b9df4c37f95e2011b8bd21f25a2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 27ee4b9df4c37f95e2011b8bd21f25a2,kubernetes.io/config.seen: 2024-01-08T22:21:45.473218997Z,kubernetes.io/config.source: file,},
RuntimeHandler:,},&PodSandbox{Id:cefcbd6c3f309345f2b99af94d9273982329c934ec3c1a7438d0e797f01a6db6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-292054,Uid:e3295cbb0d1303870eed006ab815b2a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752506044190262,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3295cbb0d1303870eed006ab815b2a8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.18:8444,kubernetes.io/config.hash: e3295cbb0d1303870eed006ab815b2a8,kubernetes.io/config.seen: 2024-01-08T22:21:45.473217199Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f1a8cb24bd1c6b2abf2c5ae299e0c81d8e7da1f444160a5901faeb870f47873,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-292054,Uid:1e73bf885258e1ba6
2654850d16abea3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752506021008401,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e73bf885258e1ba62654850d16abea3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.18:2379,kubernetes.io/config.hash: 1e73bf885258e1ba62654850d16abea3,kubernetes.io/config.seen: 2024-01-08T22:21:45.473211173Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=2af1ecd1-038e-408e-a3a1-dda8e8ca8c27 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.767904345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=847ba9cf-e5d3-410a-a90e-f2497974a070 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.767981296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=847ba9cf-e5d3-410a-a90e-f2497974a070 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.768309083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c,PodSandboxId:6b51dd8a2a2b8892e8acd42cd11153d2611c9bd4d40129f425ad4c7268fc012e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752534111353778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c2430d-d84e-415e-83b3-c32e7635fe74,},Annotations:map[string]string{io.kubernetes.container.hash: c3c57d92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6,PodSandboxId:11be2fc68090634a321ee4d5fc0afda5b6cf95356d717aa1bbf03d5ae84f037a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752533540374408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwmkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2,},Annotations:map[string]string{io.kubernetes.container.hash: 69bd94d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570,PodSandboxId:cf4667045a70dd9fe5220f2a651d08f8533a1e47f4840bf7b548433457e4a4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752532144310365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r27zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82dae88-118a-4e13-a714-1240d48dfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8b267c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf,PodSandboxId:3f1a8cb24bd1c6b2abf2c5ae299e0c81d8e7da1f444160a5901faeb870f47873,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752507093529577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e73bf885258e1ba6
2654850d16abea3,},Annotations:map[string]string{io.kubernetes.container.hash: af30e0f5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4,PodSandboxId:191b85966782541646083985440233483d4f699c96b098ee64a38dcdc07c30de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752507211657648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b97eb78da9d1b4f
d8649df06c7ca7c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d,PodSandboxId:98f3c0e3a1bad8e806448c7c2e20618caa5f4e39ef02f80c5969d9f12ec14cd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752507065623484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 27ee4b9df4c37f95e2011b8bd21f25a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348,PodSandboxId:cefcbd6c3f309345f2b99af94d9273982329c934ec3c1a7438d0e797f01a6db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752506813371814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e3295cbb0d1303870eed006ab815b2a8,},Annotations:map[string]string{io.kubernetes.container.hash: c286a60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=847ba9cf-e5d3-410a-a90e-f2497974a070 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.787368219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=462c42af-906a-4e8a-9230-eb75e05de917 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.787513352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=462c42af-906a-4e8a-9230-eb75e05de917 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.789038161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6ea7b519-9b3f-43e9-bd53-1cead497e2ce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.789539471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753329789492210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6ea7b519-9b3f-43e9-bd53-1cead497e2ce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.790072031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d7531bc0-a345-49fe-8a0d-f30c79da009b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.790142993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d7531bc0-a345-49fe-8a0d-f30c79da009b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:29 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:35:29.790324101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c,PodSandboxId:6b51dd8a2a2b8892e8acd42cd11153d2611c9bd4d40129f425ad4c7268fc012e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752534111353778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c2430d-d84e-415e-83b3-c32e7635fe74,},Annotations:map[string]string{io.kubernetes.container.hash: c3c57d92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6,PodSandboxId:11be2fc68090634a321ee4d5fc0afda5b6cf95356d717aa1bbf03d5ae84f037a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752533540374408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwmkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2,},Annotations:map[string]string{io.kubernetes.container.hash: 69bd94d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570,PodSandboxId:cf4667045a70dd9fe5220f2a651d08f8533a1e47f4840bf7b548433457e4a4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752532144310365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r27zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82dae88-118a-4e13-a714-1240d48dfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8b267c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf,PodSandboxId:3f1a8cb24bd1c6b2abf2c5ae299e0c81d8e7da1f444160a5901faeb870f47873,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752507093529577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e73bf885258e1ba6
2654850d16abea3,},Annotations:map[string]string{io.kubernetes.container.hash: af30e0f5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4,PodSandboxId:191b85966782541646083985440233483d4f699c96b098ee64a38dcdc07c30de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752507211657648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b97eb78da9d1b4f
d8649df06c7ca7c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d,PodSandboxId:98f3c0e3a1bad8e806448c7c2e20618caa5f4e39ef02f80c5969d9f12ec14cd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752507065623484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 27ee4b9df4c37f95e2011b8bd21f25a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348,PodSandboxId:cefcbd6c3f309345f2b99af94d9273982329c934ec3c1a7438d0e797f01a6db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752506813371814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e3295cbb0d1303870eed006ab815b2a8,},Annotations:map[string]string{io.kubernetes.container.hash: c286a60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d7531bc0-a345-49fe-8a0d-f30c79da009b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	37ec1a7ab6aa1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   6b51dd8a2a2b8       storage-provisioner
	6c02f8fe98e2f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   11be2fc680906       kube-proxy-bwmkb
	a28f303c4e97b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   cf4667045a70d       coredns-5dd5756b68-r27zw
	87f8525af63e6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   191b859667825       kube-scheduler-default-k8s-diff-port-292054
	bcf8add63ad3e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   3f1a8cb24bd1c       etcd-default-k8s-diff-port-292054
	3e507ce6d6a23       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   98f3c0e3a1bad       kube-controller-manager-default-k8s-diff-port-292054
	491ed169ad2f7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   cefcbd6c3f309       kube-apiserver-default-k8s-diff-port-292054
	
	
	==> coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56909 - 15984 "HINFO IN 1941820745804244463.1315308648900132827. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025653962s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-292054
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-292054
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=default-k8s-diff-port-292054
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_21_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:21:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-292054
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:35:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:32:29 +0000   Mon, 08 Jan 2024 22:21:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:32:29 +0000   Mon, 08 Jan 2024 22:21:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:32:29 +0000   Mon, 08 Jan 2024 22:21:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:32:29 +0000   Mon, 08 Jan 2024 22:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.18
	  Hostname:    default-k8s-diff-port-292054
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 00f023c105c24aeda2854315360f800d
	  System UUID:                00f023c1-05c2-4aed-a285-4315360f800d
	  Boot ID:                    fec1a090-c5ed-42d8-b7f7-12fa03a91aa5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-r27zw                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-292054                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-292054             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-292054    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-bwmkb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-292054             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-jm9lg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-292054 event: Registered Node default-k8s-diff-port-292054 in Controller
	
	
	==> dmesg <==
	[Jan 8 22:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074352] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan 8 22:16] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.746837] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149245] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.649926] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.392742] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.151355] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.209600] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.141618] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[  +0.362096] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[ +18.964784] systemd-fstab-generator[930]: Ignoring "noauto" for root device
	[ +21.812844] kauditd_printk_skb: 29 callbacks suppressed
	[Jan 8 22:21] systemd-fstab-generator[3542]: Ignoring "noauto" for root device
	[ +10.861340] systemd-fstab-generator[3864]: Ignoring "noauto" for root device
	[Jan 8 22:22] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.113569] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] <==
	{"level":"info","ts":"2024-01-08T22:21:49.293897Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.18:2380"}
	{"level":"info","ts":"2024-01-08T22:21:49.294069Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.18:2380"}
	{"level":"info","ts":"2024-01-08T22:21:49.295068Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e3895747abc9dda3","initial-advertise-peer-urls":["https://192.168.50.18:2380"],"listen-peer-urls":["https://192.168.50.18:2380"],"advertise-client-urls":["https://192.168.50.18:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.18:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T22:21:49.295136Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T22:21:49.750546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3895747abc9dda3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T22:21:49.750708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3895747abc9dda3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T22:21:49.750764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3895747abc9dda3 received MsgPreVoteResp from e3895747abc9dda3 at term 1"}
	{"level":"info","ts":"2024-01-08T22:21:49.750799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3895747abc9dda3 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T22:21:49.750841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3895747abc9dda3 received MsgVoteResp from e3895747abc9dda3 at term 2"}
	{"level":"info","ts":"2024-01-08T22:21:49.750869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3895747abc9dda3 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T22:21:49.750895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e3895747abc9dda3 elected leader e3895747abc9dda3 at term 2"}
	{"level":"info","ts":"2024-01-08T22:21:49.75577Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e3895747abc9dda3","local-member-attributes":"{Name:default-k8s-diff-port-292054 ClientURLs:[https://192.168.50.18:2379]}","request-path":"/0/members/e3895747abc9dda3/attributes","cluster-id":"3c16f1003b534ab0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T22:21:49.75591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:21:49.757213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T22:21:49.757402Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:21:49.757879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:21:49.761152Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.18:2379"}
	{"level":"info","ts":"2024-01-08T22:21:49.767549Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T22:21:49.767706Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T22:21:49.812861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3c16f1003b534ab0","local-member-id":"e3895747abc9dda3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:21:49.813036Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:21:49.816513Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:31:49.842539Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-01-08T22:31:49.845527Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":722,"took":"2.535269ms","hash":103817928}
	{"level":"info","ts":"2024-01-08T22:31:49.845612Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":103817928,"revision":722,"compact-revision":-1}
	
	
	==> kernel <==
	 22:35:30 up 19 min,  0 users,  load average: 0.22, 0.29, 0.27
	Linux default-k8s-diff-port-292054 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] <==
	I0108 22:31:52.158075       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:31:53.159377       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:31:53.159803       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:31:53.159864       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:31:53.159377       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:31:53.160076       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:31:53.161351       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:32:52.007181       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:32:53.160150       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:32:53.160241       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:32:53.160253       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:32:53.161553       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:32:53.161662       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:32:53.161719       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:33:52.007219       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:34:52.006593       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:34:53.160357       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:34:53.160733       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:34:53.160786       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:34:53.162930       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:34:53.163059       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:34:53.163087       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] <==
	I0108 22:29:38.619974       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:30:08.266910       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:30:08.630692       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:30:38.273979       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:30:38.641613       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:31:08.281791       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:31:08.651621       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:31:38.290612       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:31:38.664565       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:32:08.308079       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:08.674923       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:32:38.315230       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:38.685843       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:33:08.321238       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:08.700256       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 22:33:21.427120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="280.556µs"
	I0108 22:33:33.421725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="295.787µs"
	E0108 22:33:38.329259       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:38.710653       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:34:08.339817       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:34:08.723840       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:34:38.347173       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:34:38.735175       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:35:08.357381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:35:08.758907       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] <==
	I0108 22:22:14.112794       1 server_others.go:69] "Using iptables proxy"
	I0108 22:22:14.168091       1 node.go:141] Successfully retrieved node IP: 192.168.50.18
	I0108 22:22:14.285633       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 22:22:14.285681       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:22:14.290881       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:22:14.292798       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:22:14.295558       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:22:14.295618       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:22:14.299022       1 config.go:188] "Starting service config controller"
	I0108 22:22:14.299956       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:22:14.301948       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:22:14.302136       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:22:14.302561       1 config.go:315] "Starting node config controller"
	I0108 22:22:14.302699       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:22:14.402937       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 22:22:14.403033       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:22:14.403102       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] <==
	W0108 22:21:53.159525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:21:53.159601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:21:53.322769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:21:53.322821       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 22:21:53.349670       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:21:53.349743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:21:53.353417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:53.353630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:53.367494       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:21:53.367548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:21:53.397386       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:21:53.397597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:21:53.453807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:21:53.453906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 22:21:53.512738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:53.512791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:53.581662       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:21:53.581766       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:21:53.638788       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:21:53.638909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 22:21:53.650904       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:53.651042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:53.755294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:21:53.755406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0108 22:21:55.674207       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:16:05 UTC, ends at Mon 2024-01-08 22:35:30 UTC. --
	Jan 08 22:32:56 default-k8s-diff-port-292054 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:32:56 default-k8s-diff-port-292054 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:33:07 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:33:07.417006    3871 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 22:33:07 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:33:07.417080    3871 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 22:33:07 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:33:07.417380    3871 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9vc9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-jm9lg_kube-system(b94afab5-f573-4ed1-bc29-64eb8e90c574): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 22:33:07 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:33:07.417526    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:33:21 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:33:21.402312    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:33:33 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:33:33.401650    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:33:47 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:33:47.402222    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:33:56 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:33:56.494284    3871 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:33:56 default-k8s-diff-port-292054 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:33:56 default-k8s-diff-port-292054 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:33:56 default-k8s-diff-port-292054 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:34:00 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:00.409521    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:14 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:14.403069    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:25 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:25.406975    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:37 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:37.401849    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:51 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:51.401874    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:56 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:56.495701    3871 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:34:56 default-k8s-diff-port-292054 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:34:56 default-k8s-diff-port-292054 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:34:56 default-k8s-diff-port-292054 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:35:06 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:35:06.404845    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:35:19 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:35:19.402182    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:35:30 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:35:30.406565    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	
	
	==> storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] <==
	I0108 22:22:14.318703       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:22:14.345088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:22:14.345204       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:22:14.357415       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:22:14.358657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-292054_be176cbb-a878-4179-b11c-1e8615a95ccf!
	I0108 22:22:14.363557       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56fe9315-e25a-4bc3-80aa-74f0ea93b554", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-292054_be176cbb-a878-4179-b11c-1e8615a95ccf became leader
	I0108 22:22:14.461297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-292054_be176cbb-a878-4179-b11c-1e8615a95ccf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-292054 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-jm9lg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-292054 describe pod metrics-server-57f55c9bc5-jm9lg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-292054 describe pod metrics-server-57f55c9bc5-jm9lg: exit status 1 (99.075677ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-jm9lg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-292054 describe pod metrics-server-57f55c9bc5-jm9lg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (319.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 22:32:44.574504  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-675668 -n no-preload-675668
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-08 22:35:29.15821011 +0000 UTC m=+5604.859332462
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-675668 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-675668 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.873µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-675668 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675668 -n no-preload-675668
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-675668 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-675668 logs -n 25: (1.561902832s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-415665                                        | pause-415665                 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-523607                              | cert-expiration-523607       | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343954 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | disable-driver-mounts-343954                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:09 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079759        | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC | 08 Jan 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-675668             | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-903819            | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-292054  | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC | 08 Jan 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC |                     |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079759             | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC | 08 Jan 24 22:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-675668                  | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-903819                 | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-292054       | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:26 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:35 UTC |
	| start   | -p newest-cni-154365 --memory=2200 --alsologtostderr   | newest-cni-154365            | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:35:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:35:06.501355  380658 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:35:06.501575  380658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:35:06.501586  380658 out.go:309] Setting ErrFile to fd 2...
	I0108 22:35:06.501591  380658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:35:06.501828  380658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:35:06.502674  380658 out.go:303] Setting JSON to false
	I0108 22:35:06.504126  380658 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11833,"bootTime":1704741474,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:35:06.504231  380658 start.go:138] virtualization: kvm guest
	I0108 22:35:06.507473  380658 out.go:177] * [newest-cni-154365] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:35:06.509882  380658 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:35:06.509781  380658 notify.go:220] Checking for updates...
	I0108 22:35:06.512039  380658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:35:06.513740  380658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:35:06.515232  380658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:35:06.516645  380658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:35:06.518041  380658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:35:06.520059  380658 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:35:06.520238  380658 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:35:06.520359  380658 config.go:182] Loaded profile config "no-preload-675668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:35:06.520600  380658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:35:06.564846  380658 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 22:35:06.566955  380658 start.go:298] selected driver: kvm2
	I0108 22:35:06.566992  380658 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:35:06.567011  380658 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:35:06.568143  380658 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:35:06.568274  380658 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:35:06.585867  380658 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:35:06.585922  380658 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0108 22:35:06.585949  380658 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0108 22:35:06.586228  380658 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0108 22:35:06.586384  380658 cni.go:84] Creating CNI manager for ""
	I0108 22:35:06.586399  380658 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:35:06.586413  380658 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 22:35:06.586420  380658 start_flags.go:321] config:
	{Name:newest-cni-154365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-154365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:35:06.586657  380658 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:35:06.589434  380658 out.go:177] * Starting control plane node newest-cni-154365 in cluster newest-cni-154365
	I0108 22:35:06.591174  380658 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:35:06.591261  380658 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 22:35:06.591296  380658 cache.go:56] Caching tarball of preloaded images
	I0108 22:35:06.591501  380658 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:35:06.591544  380658 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0108 22:35:06.591688  380658 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/config.json ...
	I0108 22:35:06.591713  380658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/config.json: {Name:mk7a00387e7d74badc28ce1e19e14d16de8ddd24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:35:06.592026  380658 start.go:365] acquiring machines lock for newest-cni-154365: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:35:06.592076  380658 start.go:369] acquired machines lock for "newest-cni-154365" in 28.092µs
	I0108 22:35:06.592115  380658 start.go:93] Provisioning new machine with config: &{Name:newest-cni-154365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-154365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:35:06.592567  380658 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 22:35:06.595559  380658 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 22:35:06.595850  380658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:35:06.595941  380658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:35:06.613173  380658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35741
	I0108 22:35:06.613670  380658 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:35:06.614306  380658 main.go:141] libmachine: Using API Version  1
	I0108 22:35:06.614331  380658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:35:06.614721  380658 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:35:06.614938  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetMachineName
	I0108 22:35:06.615128  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:35:06.615351  380658 start.go:159] libmachine.API.Create for "newest-cni-154365" (driver="kvm2")
	I0108 22:35:06.615411  380658 client.go:168] LocalClient.Create starting
	I0108 22:35:06.615457  380658 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem
	I0108 22:35:06.615503  380658 main.go:141] libmachine: Decoding PEM data...
	I0108 22:35:06.615520  380658 main.go:141] libmachine: Parsing certificate...
	I0108 22:35:06.615588  380658 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem
	I0108 22:35:06.615608  380658 main.go:141] libmachine: Decoding PEM data...
	I0108 22:35:06.615622  380658 main.go:141] libmachine: Parsing certificate...
	I0108 22:35:06.615638  380658 main.go:141] libmachine: Running pre-create checks...
	I0108 22:35:06.615650  380658 main.go:141] libmachine: (newest-cni-154365) Calling .PreCreateCheck
	I0108 22:35:06.616003  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetConfigRaw
	I0108 22:35:06.616472  380658 main.go:141] libmachine: Creating machine...
	I0108 22:35:06.616489  380658 main.go:141] libmachine: (newest-cni-154365) Calling .Create
	I0108 22:35:06.616654  380658 main.go:141] libmachine: (newest-cni-154365) Creating KVM machine...
	I0108 22:35:06.618114  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found existing default KVM network
	I0108 22:35:06.620366  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:06.620172  380682 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025e060}
	I0108 22:35:06.626896  380658 main.go:141] libmachine: (newest-cni-154365) DBG | trying to create private KVM network mk-newest-cni-154365 192.168.39.0/24...
	I0108 22:35:06.723349  380658 main.go:141] libmachine: (newest-cni-154365) DBG | private KVM network mk-newest-cni-154365 192.168.39.0/24 created
	I0108 22:35:06.723421  380658 main.go:141] libmachine: (newest-cni-154365) Setting up store path in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365 ...
	I0108 22:35:06.723444  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:06.723325  380682 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:35:06.723459  380658 main.go:141] libmachine: (newest-cni-154365) Building disk image from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 22:35:06.723634  380658 main.go:141] libmachine: (newest-cni-154365) Downloading /home/jenkins/minikube-integration/17866-334768/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 22:35:06.982242  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:06.982083  380682 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa...
	I0108 22:35:07.057649  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:07.057492  380682 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/newest-cni-154365.rawdisk...
	I0108 22:35:07.057701  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Writing magic tar header
	I0108 22:35:07.057721  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Writing SSH key tar header
	I0108 22:35:07.057734  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:07.057691  380682 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365 ...
	I0108 22:35:07.057906  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365 (perms=drwx------)
	I0108 22:35:07.057988  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365
	I0108 22:35:07.058006  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines (perms=drwxr-xr-x)
	I0108 22:35:07.058022  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube (perms=drwxr-xr-x)
	I0108 22:35:07.058033  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768 (perms=drwxrwxr-x)
	I0108 22:35:07.058044  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 22:35:07.058053  380658 main.go:141] libmachine: (newest-cni-154365) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 22:35:07.058064  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines
	I0108 22:35:07.058080  380658 main.go:141] libmachine: (newest-cni-154365) Creating domain...
	I0108 22:35:07.058092  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:35:07.058114  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768
	I0108 22:35:07.058126  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 22:35:07.058149  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home/jenkins
	I0108 22:35:07.058180  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Checking permissions on dir: /home
	I0108 22:35:07.058223  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Skipping /home - not owner
	I0108 22:35:07.059697  380658 main.go:141] libmachine: (newest-cni-154365) define libvirt domain using xml: 
	I0108 22:35:07.059723  380658 main.go:141] libmachine: (newest-cni-154365) <domain type='kvm'>
	I0108 22:35:07.059734  380658 main.go:141] libmachine: (newest-cni-154365)   <name>newest-cni-154365</name>
	I0108 22:35:07.059750  380658 main.go:141] libmachine: (newest-cni-154365)   <memory unit='MiB'>2200</memory>
	I0108 22:35:07.059765  380658 main.go:141] libmachine: (newest-cni-154365)   <vcpu>2</vcpu>
	I0108 22:35:07.059786  380658 main.go:141] libmachine: (newest-cni-154365)   <features>
	I0108 22:35:07.059801  380658 main.go:141] libmachine: (newest-cni-154365)     <acpi/>
	I0108 22:35:07.059812  380658 main.go:141] libmachine: (newest-cni-154365)     <apic/>
	I0108 22:35:07.059852  380658 main.go:141] libmachine: (newest-cni-154365)     <pae/>
	I0108 22:35:07.059890  380658 main.go:141] libmachine: (newest-cni-154365)     
	I0108 22:35:07.059910  380658 main.go:141] libmachine: (newest-cni-154365)   </features>
	I0108 22:35:07.059924  380658 main.go:141] libmachine: (newest-cni-154365)   <cpu mode='host-passthrough'>
	I0108 22:35:07.059938  380658 main.go:141] libmachine: (newest-cni-154365)   
	I0108 22:35:07.059951  380658 main.go:141] libmachine: (newest-cni-154365)   </cpu>
	I0108 22:35:07.059966  380658 main.go:141] libmachine: (newest-cni-154365)   <os>
	I0108 22:35:07.059987  380658 main.go:141] libmachine: (newest-cni-154365)     <type>hvm</type>
	I0108 22:35:07.060002  380658 main.go:141] libmachine: (newest-cni-154365)     <boot dev='cdrom'/>
	I0108 22:35:07.060011  380658 main.go:141] libmachine: (newest-cni-154365)     <boot dev='hd'/>
	I0108 22:35:07.060022  380658 main.go:141] libmachine: (newest-cni-154365)     <bootmenu enable='no'/>
	I0108 22:35:07.060035  380658 main.go:141] libmachine: (newest-cni-154365)   </os>
	I0108 22:35:07.060055  380658 main.go:141] libmachine: (newest-cni-154365)   <devices>
	I0108 22:35:07.060078  380658 main.go:141] libmachine: (newest-cni-154365)     <disk type='file' device='cdrom'>
	I0108 22:35:07.060095  380658 main.go:141] libmachine: (newest-cni-154365)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/boot2docker.iso'/>
	I0108 22:35:07.060119  380658 main.go:141] libmachine: (newest-cni-154365)       <target dev='hdc' bus='scsi'/>
	I0108 22:35:07.060130  380658 main.go:141] libmachine: (newest-cni-154365)       <readonly/>
	I0108 22:35:07.060146  380658 main.go:141] libmachine: (newest-cni-154365)     </disk>
	I0108 22:35:07.060161  380658 main.go:141] libmachine: (newest-cni-154365)     <disk type='file' device='disk'>
	I0108 22:35:07.060176  380658 main.go:141] libmachine: (newest-cni-154365)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 22:35:07.060217  380658 main.go:141] libmachine: (newest-cni-154365)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/newest-cni-154365.rawdisk'/>
	I0108 22:35:07.060240  380658 main.go:141] libmachine: (newest-cni-154365)       <target dev='hda' bus='virtio'/>
	I0108 22:35:07.060251  380658 main.go:141] libmachine: (newest-cni-154365)     </disk>
	I0108 22:35:07.060256  380658 main.go:141] libmachine: (newest-cni-154365)     <interface type='network'>
	I0108 22:35:07.060292  380658 main.go:141] libmachine: (newest-cni-154365)       <source network='mk-newest-cni-154365'/>
	I0108 22:35:07.060315  380658 main.go:141] libmachine: (newest-cni-154365)       <model type='virtio'/>
	I0108 22:35:07.060329  380658 main.go:141] libmachine: (newest-cni-154365)     </interface>
	I0108 22:35:07.060339  380658 main.go:141] libmachine: (newest-cni-154365)     <interface type='network'>
	I0108 22:35:07.060347  380658 main.go:141] libmachine: (newest-cni-154365)       <source network='default'/>
	I0108 22:35:07.060359  380658 main.go:141] libmachine: (newest-cni-154365)       <model type='virtio'/>
	I0108 22:35:07.060374  380658 main.go:141] libmachine: (newest-cni-154365)     </interface>
	I0108 22:35:07.060390  380658 main.go:141] libmachine: (newest-cni-154365)     <serial type='pty'>
	I0108 22:35:07.060400  380658 main.go:141] libmachine: (newest-cni-154365)       <target port='0'/>
	I0108 22:35:07.060409  380658 main.go:141] libmachine: (newest-cni-154365)     </serial>
	I0108 22:35:07.060422  380658 main.go:141] libmachine: (newest-cni-154365)     <console type='pty'>
	I0108 22:35:07.060430  380658 main.go:141] libmachine: (newest-cni-154365)       <target type='serial' port='0'/>
	I0108 22:35:07.060465  380658 main.go:141] libmachine: (newest-cni-154365)     </console>
	I0108 22:35:07.060489  380658 main.go:141] libmachine: (newest-cni-154365)     <rng model='virtio'>
	I0108 22:35:07.060506  380658 main.go:141] libmachine: (newest-cni-154365)       <backend model='random'>/dev/random</backend>
	I0108 22:35:07.060517  380658 main.go:141] libmachine: (newest-cni-154365)     </rng>
	I0108 22:35:07.060536  380658 main.go:141] libmachine: (newest-cni-154365)     
	I0108 22:35:07.060545  380658 main.go:141] libmachine: (newest-cni-154365)     
	I0108 22:35:07.060563  380658 main.go:141] libmachine: (newest-cni-154365)   </devices>
	I0108 22:35:07.060577  380658 main.go:141] libmachine: (newest-cni-154365) </domain>
	I0108 22:35:07.060590  380658 main.go:141] libmachine: (newest-cni-154365) 
	I0108 22:35:07.065658  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:c3:e1:8b in network default
	I0108 22:35:07.066342  380658 main.go:141] libmachine: (newest-cni-154365) Ensuring networks are active...
	I0108 22:35:07.066367  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:07.067042  380658 main.go:141] libmachine: (newest-cni-154365) Ensuring network default is active
	I0108 22:35:07.067436  380658 main.go:141] libmachine: (newest-cni-154365) Ensuring network mk-newest-cni-154365 is active
	I0108 22:35:07.068066  380658 main.go:141] libmachine: (newest-cni-154365) Getting domain xml...
	I0108 22:35:07.068846  380658 main.go:141] libmachine: (newest-cni-154365) Creating domain...
	I0108 22:35:08.506659  380658 main.go:141] libmachine: (newest-cni-154365) Waiting to get IP...
	I0108 22:35:08.507673  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:08.508205  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:08.508312  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:08.508228  380682 retry.go:31] will retry after 238.401301ms: waiting for machine to come up
	I0108 22:35:08.749140  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:08.749796  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:08.749822  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:08.749742  380682 retry.go:31] will retry after 309.542396ms: waiting for machine to come up
	I0108 22:35:09.061535  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:09.062125  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:09.062163  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:09.062075  380682 retry.go:31] will retry after 393.893029ms: waiting for machine to come up
	I0108 22:35:09.457677  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:09.458303  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:09.458334  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:09.458254  380682 retry.go:31] will retry after 425.719934ms: waiting for machine to come up
	I0108 22:35:09.885555  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:09.885974  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:09.886000  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:09.885933  380682 retry.go:31] will retry after 483.756468ms: waiting for machine to come up
	I0108 22:35:10.371798  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:10.372301  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:10.372331  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:10.372259  380682 retry.go:31] will retry after 910.498928ms: waiting for machine to come up
	I0108 22:35:11.284344  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:11.284957  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:11.284994  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:11.284899  380682 retry.go:31] will retry after 1.093353625s: waiting for machine to come up
	I0108 22:35:12.380043  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:12.380759  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:12.380799  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:12.380670  380682 retry.go:31] will retry after 1.460216822s: waiting for machine to come up
	I0108 22:35:13.842429  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:13.842995  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:13.843030  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:13.842952  380682 retry.go:31] will retry after 1.430170501s: waiting for machine to come up
	I0108 22:35:15.275789  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:15.276323  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:15.276362  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:15.276277  380682 retry.go:31] will retry after 1.621041797s: waiting for machine to come up
	I0108 22:35:16.899140  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:16.899761  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:16.899791  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:16.899708  380682 retry.go:31] will retry after 2.701894127s: waiting for machine to come up
	I0108 22:35:19.605036  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:19.605624  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:19.605652  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:19.605586  380682 retry.go:31] will retry after 3.62067067s: waiting for machine to come up
	I0108 22:35:23.227405  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:23.227879  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:23.227924  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:23.227827  380682 retry.go:31] will retry after 3.172675173s: waiting for machine to come up
	I0108 22:35:26.402974  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:26.403451  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find current IP address of domain newest-cni-154365 in network mk-newest-cni-154365
	I0108 22:35:26.403490  380658 main.go:141] libmachine: (newest-cni-154365) DBG | I0108 22:35:26.403345  380682 retry.go:31] will retry after 5.398315404s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:15:20 UTC, ends at Mon 2024-01-08 22:35:30 UTC. --
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.005831166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51,PodSandboxId:adfcbf086da7fe05126d54bbd9c86f5e67d9050368718238923143bf1d3cae52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704752465652665361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c64608-a169-455b-a5e9-0ecb4161432c,},Annotations:map[string]string{io.kubernetes.container.hash: c585c291,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1,PodSandboxId:054b819514e40e7785a19c7a4dba0d91d6bdd3b1aa3154137140b25ce1dd1042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704752465514889849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6106f11-9345-4915-b7cc-d2671a7c4e72,},Annotations:map[string]string{io.kubernetes.container.hash: 7c626287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1,PodSandboxId:1574fec38aef2e5ca6ff2371d81e3c69d3a294a0934a94ebfed4a376a6dd83af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704752464744612391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q6x86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cad2e0f-a7af-453d-9eaf-55b56e41e27b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b0cfff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a,PodSandboxId:437bddedb8cde408c0068598a4eef15afe0e83ba48b6ee656768499b2258792d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704752439635913133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da8d62a73b9aa74f281d065810637e52,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519,PodSandboxId:6d1529b8b59b4b346e4343b33a212981294716a8f573c87a7c822592324112ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704752439757202259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b429afd35fb35e7175a2229d9c3b42,},Annotations:map
[string]string{io.kubernetes.container.hash: c4e55e0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368,PodSandboxId:7d811d9bcf646aa74fe53a5fd83fbd7c25a658c5256f16f79b06a4cb5f2edb3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704752439731337174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebce1fdd2b617f12922a15f46
381764a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e,PodSandboxId:5224f1f876a4823df9640254832758936cdfa4be3a2ee49a38e7d31aeff2d237,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704752439403274680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54e50fcfb3be71e057db2a64f2cb179,},A
nnotations:map[string]string{io.kubernetes.container.hash: a74adc68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c24ea01-227d-4909-bb6b-979a688f2256 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.011006687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f9825bc0-b1d8-4fe2-9438-52924547d1ed name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.011092843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f9825bc0-b1d8-4fe2-9438-52924547d1ed name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.011307150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51,PodSandboxId:adfcbf086da7fe05126d54bbd9c86f5e67d9050368718238923143bf1d3cae52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704752465652665361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c64608-a169-455b-a5e9-0ecb4161432c,},Annotations:map[string]string{io.kubernetes.container.hash: c585c291,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1,PodSandboxId:054b819514e40e7785a19c7a4dba0d91d6bdd3b1aa3154137140b25ce1dd1042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704752465514889849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6106f11-9345-4915-b7cc-d2671a7c4e72,},Annotations:map[string]string{io.kubernetes.container.hash: 7c626287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1,PodSandboxId:1574fec38aef2e5ca6ff2371d81e3c69d3a294a0934a94ebfed4a376a6dd83af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704752464744612391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q6x86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cad2e0f-a7af-453d-9eaf-55b56e41e27b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b0cfff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a,PodSandboxId:437bddedb8cde408c0068598a4eef15afe0e83ba48b6ee656768499b2258792d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704752439635913133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da8d62a73b9aa74f281d065810637e52,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519,PodSandboxId:6d1529b8b59b4b346e4343b33a212981294716a8f573c87a7c822592324112ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704752439757202259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b429afd35fb35e7175a2229d9c3b42,},Annotations:map
[string]string{io.kubernetes.container.hash: c4e55e0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368,PodSandboxId:7d811d9bcf646aa74fe53a5fd83fbd7c25a658c5256f16f79b06a4cb5f2edb3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704752439731337174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebce1fdd2b617f12922a15f46
381764a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e,PodSandboxId:5224f1f876a4823df9640254832758936cdfa4be3a2ee49a38e7d31aeff2d237,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704752439403274680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54e50fcfb3be71e057db2a64f2cb179,},A
nnotations:map[string]string{io.kubernetes.container.hash: a74adc68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f9825bc0-b1d8-4fe2-9438-52924547d1ed name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.069305364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b71848f1-2cec-429f-a7b6-d55945665f6a name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.069414957Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b71848f1-2cec-429f-a7b6-d55945665f6a name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.071794048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9cefe236-6bf5-4145-83d8-0d2294641344 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.072637447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753330072612920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=9cefe236-6bf5-4145-83d8-0d2294641344 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.074315668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4d818708-e985-4378-9a00-ae426038d98e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.074403345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4d818708-e985-4378-9a00-ae426038d98e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.074629315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51,PodSandboxId:adfcbf086da7fe05126d54bbd9c86f5e67d9050368718238923143bf1d3cae52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704752465652665361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c64608-a169-455b-a5e9-0ecb4161432c,},Annotations:map[string]string{io.kubernetes.container.hash: c585c291,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1,PodSandboxId:054b819514e40e7785a19c7a4dba0d91d6bdd3b1aa3154137140b25ce1dd1042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704752465514889849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6106f11-9345-4915-b7cc-d2671a7c4e72,},Annotations:map[string]string{io.kubernetes.container.hash: 7c626287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1,PodSandboxId:1574fec38aef2e5ca6ff2371d81e3c69d3a294a0934a94ebfed4a376a6dd83af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704752464744612391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q6x86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cad2e0f-a7af-453d-9eaf-55b56e41e27b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b0cfff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a,PodSandboxId:437bddedb8cde408c0068598a4eef15afe0e83ba48b6ee656768499b2258792d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704752439635913133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da8d62a73b9aa74f281d065810637e52,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519,PodSandboxId:6d1529b8b59b4b346e4343b33a212981294716a8f573c87a7c822592324112ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704752439757202259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b429afd35fb35e7175a2229d9c3b42,},Annotations:map
[string]string{io.kubernetes.container.hash: c4e55e0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368,PodSandboxId:7d811d9bcf646aa74fe53a5fd83fbd7c25a658c5256f16f79b06a4cb5f2edb3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704752439731337174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebce1fdd2b617f12922a15f46
381764a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e,PodSandboxId:5224f1f876a4823df9640254832758936cdfa4be3a2ee49a38e7d31aeff2d237,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704752439403274680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54e50fcfb3be71e057db2a64f2cb179,},A
nnotations:map[string]string{io.kubernetes.container.hash: a74adc68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4d818708-e985-4378-9a00-ae426038d98e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.137624657Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=89839751-7de9-454a-8e86-46dba2165649 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.137847360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=89839751-7de9-454a-8e86-46dba2165649 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.140117637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c8e1be27-0295-465c-bfeb-242829ebd0b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.140613063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753330140591594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=c8e1be27-0295-465c-bfeb-242829ebd0b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.141776443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f95dedae-eb51-48f1-8d37-d31f7844e5bc name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.141884234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f95dedae-eb51-48f1-8d37-d31f7844e5bc name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.142089438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51,PodSandboxId:adfcbf086da7fe05126d54bbd9c86f5e67d9050368718238923143bf1d3cae52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704752465652665361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c64608-a169-455b-a5e9-0ecb4161432c,},Annotations:map[string]string{io.kubernetes.container.hash: c585c291,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1,PodSandboxId:054b819514e40e7785a19c7a4dba0d91d6bdd3b1aa3154137140b25ce1dd1042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704752465514889849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6106f11-9345-4915-b7cc-d2671a7c4e72,},Annotations:map[string]string{io.kubernetes.container.hash: 7c626287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1,PodSandboxId:1574fec38aef2e5ca6ff2371d81e3c69d3a294a0934a94ebfed4a376a6dd83af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704752464744612391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q6x86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cad2e0f-a7af-453d-9eaf-55b56e41e27b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b0cfff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a,PodSandboxId:437bddedb8cde408c0068598a4eef15afe0e83ba48b6ee656768499b2258792d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704752439635913133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da8d62a73b9aa74f281d065810637e52,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519,PodSandboxId:6d1529b8b59b4b346e4343b33a212981294716a8f573c87a7c822592324112ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704752439757202259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b429afd35fb35e7175a2229d9c3b42,},Annotations:map
[string]string{io.kubernetes.container.hash: c4e55e0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368,PodSandboxId:7d811d9bcf646aa74fe53a5fd83fbd7c25a658c5256f16f79b06a4cb5f2edb3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704752439731337174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebce1fdd2b617f12922a15f46
381764a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e,PodSandboxId:5224f1f876a4823df9640254832758936cdfa4be3a2ee49a38e7d31aeff2d237,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704752439403274680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54e50fcfb3be71e057db2a64f2cb179,},A
nnotations:map[string]string{io.kubernetes.container.hash: a74adc68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f95dedae-eb51-48f1-8d37-d31f7844e5bc name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.190526108Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=30173622-fd93-4708-a22b-34736b9a31c1 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.190640376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=30173622-fd93-4708-a22b-34736b9a31c1 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.192307025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8565ccdb-d3e8-4c72-afa3-8c0f350ead7d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.192657144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753330192644857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=8565ccdb-d3e8-4c72-afa3-8c0f350ead7d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.193307733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0486cc39-98f5-44cf-9009-5261c55f1f3b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.193399870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0486cc39-98f5-44cf-9009-5261c55f1f3b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:35:30 no-preload-675668 crio[728]: time="2024-01-08 22:35:30.193620171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51,PodSandboxId:adfcbf086da7fe05126d54bbd9c86f5e67d9050368718238923143bf1d3cae52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704752465652665361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c64608-a169-455b-a5e9-0ecb4161432c,},Annotations:map[string]string{io.kubernetes.container.hash: c585c291,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1,PodSandboxId:054b819514e40e7785a19c7a4dba0d91d6bdd3b1aa3154137140b25ce1dd1042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704752465514889849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6106f11-9345-4915-b7cc-d2671a7c4e72,},Annotations:map[string]string{io.kubernetes.container.hash: 7c626287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1,PodSandboxId:1574fec38aef2e5ca6ff2371d81e3c69d3a294a0934a94ebfed4a376a6dd83af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704752464744612391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-q6x86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cad2e0f-a7af-453d-9eaf-55b56e41e27b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b0cfff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a,PodSandboxId:437bddedb8cde408c0068598a4eef15afe0e83ba48b6ee656768499b2258792d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704752439635913133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da8d62a73b9aa74f281d065810637e52,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519,PodSandboxId:6d1529b8b59b4b346e4343b33a212981294716a8f573c87a7c822592324112ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704752439757202259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b429afd35fb35e7175a2229d9c3b42,},Annotations:map
[string]string{io.kubernetes.container.hash: c4e55e0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368,PodSandboxId:7d811d9bcf646aa74fe53a5fd83fbd7c25a658c5256f16f79b06a4cb5f2edb3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704752439731337174,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebce1fdd2b617f12922a15f46
381764a,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e,PodSandboxId:5224f1f876a4823df9640254832758936cdfa4be3a2ee49a38e7d31aeff2d237,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704752439403274680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-675668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54e50fcfb3be71e057db2a64f2cb179,},A
nnotations:map[string]string{io.kubernetes.container.hash: a74adc68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0486cc39-98f5-44cf-9009-5261c55f1f3b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e15e1c41230c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   adfcbf086da7f       storage-provisioner
	93c09e966efd8       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   14 minutes ago      Running             kube-proxy                0                   054b819514e40       kube-proxy-b2nx2
	e5f90e1ab3c3f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   1574fec38aef2       coredns-76f75df574-q6x86
	b18c1aa940c39       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   6d1529b8b59b4       etcd-no-preload-675668
	9d104fdafcd88       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   7d811d9bcf646       kube-controller-manager-no-preload-675668
	6082f16eb29f6       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   437bddedb8cde       kube-scheduler-no-preload-675668
	d24f3f60a2148       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   5224f1f876a48       kube-apiserver-no-preload-675668
	
	
	==> coredns [e5f90e1ab3c3f09ad0c7b6b26c67da10b472995a0c10715c5f4dd46dbfc4c9e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               no-preload-675668
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-675668
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=no-preload-675668
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_20_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:20:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-675668
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:35:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:31:22 +0000   Mon, 08 Jan 2024 22:20:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:31:22 +0000   Mon, 08 Jan 2024 22:20:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:31:22 +0000   Mon, 08 Jan 2024 22:20:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:31:22 +0000   Mon, 08 Jan 2024 22:20:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.153
	  Hostname:    no-preload-675668
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5be65de79214ccfa8a782e6d782b105
	  System UUID:                a5be65de-7921-4ccf-a8a7-82e6d782b105
	  Boot ID:                    cb17c24e-144a-4314-9c42-d7cf36b13e5e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-q6x86                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-675668                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-675668             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-675668    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-b2nx2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-675668             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-vb2kj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-675668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-675668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-675668 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-675668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-675668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-675668 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-675668 event: Registered Node no-preload-675668 in Controller
	
	
	==> dmesg <==
	[Jan 8 22:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072089] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.557742] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.883403] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147624] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.606727] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.208071] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.117109] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.166835] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.112168] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[  +0.250004] systemd-fstab-generator[714]: Ignoring "noauto" for root device
	[ +30.790938] systemd-fstab-generator[1342]: Ignoring "noauto" for root device
	[Jan 8 22:16] kauditd_printk_skb: 29 callbacks suppressed
	[Jan 8 22:20] systemd-fstab-generator[3911]: Ignoring "noauto" for root device
	[ +10.383523] systemd-fstab-generator[4243]: Ignoring "noauto" for root device
	[Jan 8 22:21] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [b18c1aa940c399ae4aa91216619929a59e33c80fbdba71191a2a85b11dabf519] <==
	{"level":"info","ts":"2024-01-08T22:20:42.152418Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"88da22d24bd26152","initial-advertise-peer-urls":["https://192.168.61.153:2380"],"listen-peer-urls":["https://192.168.61.153:2380"],"advertise-client-urls":["https://192.168.61.153:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.153:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T22:20:42.15247Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T22:20:42.152655Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.153:2380"}
	{"level":"info","ts":"2024-01-08T22:20:42.152671Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.153:2380"}
	{"level":"info","ts":"2024-01-08T22:20:42.523326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T22:20:42.523446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T22:20:42.523473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 received MsgPreVoteResp from 88da22d24bd26152 at term 1"}
	{"level":"info","ts":"2024-01-08T22:20:42.523488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:42.523494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 received MsgVoteResp from 88da22d24bd26152 at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:42.523504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88da22d24bd26152 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:42.523513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 88da22d24bd26152 elected leader 88da22d24bd26152 at term 2"}
	{"level":"info","ts":"2024-01-08T22:20:42.525482Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"88da22d24bd26152","local-member-attributes":"{Name:no-preload-675668 ClientURLs:[https://192.168.61.153:2379]}","request-path":"/0/members/88da22d24bd26152/attributes","cluster-id":"7dd884e79d7a6c6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T22:20:42.525797Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:20:42.526026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:20:42.526189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T22:20:42.526223Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T22:20:42.526318Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:42.52797Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7dd884e79d7a6c6","local-member-id":"88da22d24bd26152","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:42.52811Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:42.528172Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:42.529983Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T22:20:42.535487Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.153:2379"}
	{"level":"info","ts":"2024-01-08T22:30:42.582898Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-01-08T22:30:42.585692Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.312995ms","hash":4149445681}
	{"level":"info","ts":"2024-01-08T22:30:42.585837Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4149445681,"revision":714,"compact-revision":-1}
	
	
	==> kernel <==
	 22:35:30 up 20 min,  0 users,  load average: 0.14, 0.17, 0.17
	Linux no-preload-675668 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d24f3f60a2148e44ea5301a8f59eb20cf9511c9ac8c60444ed7b5830d0c19b1e] <==
	I0108 22:28:45.329888       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:30:44.331132       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:30:44.331526       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0108 22:30:45.331969       1 handler_proxy.go:93] no RequestInfo found in the context
	W0108 22:30:45.331969       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:30:45.332284       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:30:45.332325       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0108 22:30:45.332386       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:30:45.334410       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:31:45.333812       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:31:45.334033       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:31:45.334068       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:31:45.334875       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:31:45.334999       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:31:45.335083       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:33:45.334406       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:33:45.334652       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:33:45.334674       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:33:45.335941       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:33:45.336039       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:33:45.336052       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9d104fdafcd88a43c477305f11a00ffd535a8d1deeba4655900363dc045d1368] <==
	I0108 22:29:32.523405       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:30:02.003186       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:30:02.542975       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:30:32.010356       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:30:32.555287       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:31:02.018402       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:31:02.568482       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:31:32.025662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:31:32.579851       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:32:02.033259       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:02.593340       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 22:32:11.509356       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="236.979µs"
	I0108 22:32:25.512898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="260.056µs"
	E0108 22:32:32.042405       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:32.605840       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:33:02.049837       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:02.615991       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:33:32.057553       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:32.634072       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:34:02.065283       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:34:02.646109       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:34:32.075941       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:34:32.659710       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:35:02.084522       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:35:02.675484       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [93c09e966efd8af1c9cd1d094ff821d99519c7527799c4e07b649b9d5cc25ac1] <==
	I0108 22:21:06.010127       1 server_others.go:72] "Using iptables proxy"
	I0108 22:21:06.038841       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.153"]
	I0108 22:21:06.122281       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0108 22:21:06.122362       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:21:06.122379       1 server_others.go:168] "Using iptables Proxier"
	I0108 22:21:06.127422       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:21:06.127928       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0108 22:21:06.127974       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:21:06.129078       1 config.go:188] "Starting service config controller"
	I0108 22:21:06.129132       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:21:06.129152       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:21:06.129156       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:21:06.129918       1 config.go:315] "Starting node config controller"
	I0108 22:21:06.129966       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:21:06.229688       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 22:21:06.229844       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:21:06.230121       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6082f16eb29f651113be93d1cd0ad541005b8a3ae82b512fc8c8720362ffd20a] <==
	W0108 22:20:45.425939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:20:45.426066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 22:20:45.437032       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:20:45.437135       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 22:20:45.455370       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:20:45.455469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 22:20:45.458639       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:20:45.458812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 22:20:45.490033       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:20:45.490092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 22:20:45.557178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:20:45.557341       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:20:45.582793       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 22:20:45.583035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 22:20:45.688888       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:20:45.689132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:20:45.725279       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:20:45.725509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:20:45.784380       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:20:45.784793       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:20:45.833256       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:20:45.833296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:20:45.875952       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:20:45.876028       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0108 22:20:47.847503       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:15:20 UTC, ends at Mon 2024-01-08 22:35:30 UTC. --
	Jan 08 22:32:48 no-preload-675668 kubelet[4249]: E0108 22:32:48.539243    4249 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:32:48 no-preload-675668 kubelet[4249]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:32:48 no-preload-675668 kubelet[4249]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:32:48 no-preload-675668 kubelet[4249]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:32:49 no-preload-675668 kubelet[4249]: E0108 22:32:49.487624    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:33:04 no-preload-675668 kubelet[4249]: E0108 22:33:04.489478    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:33:16 no-preload-675668 kubelet[4249]: E0108 22:33:16.487849    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:33:30 no-preload-675668 kubelet[4249]: E0108 22:33:30.492178    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:33:41 no-preload-675668 kubelet[4249]: E0108 22:33:41.488184    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:33:48 no-preload-675668 kubelet[4249]: E0108 22:33:48.536109    4249 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:33:48 no-preload-675668 kubelet[4249]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:33:48 no-preload-675668 kubelet[4249]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:33:48 no-preload-675668 kubelet[4249]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:33:54 no-preload-675668 kubelet[4249]: E0108 22:33:54.489207    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:34:06 no-preload-675668 kubelet[4249]: E0108 22:34:06.488119    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:34:17 no-preload-675668 kubelet[4249]: E0108 22:34:17.488462    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:34:31 no-preload-675668 kubelet[4249]: E0108 22:34:31.488084    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:34:46 no-preload-675668 kubelet[4249]: E0108 22:34:46.489581    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:34:48 no-preload-675668 kubelet[4249]: E0108 22:34:48.536986    4249 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:34:48 no-preload-675668 kubelet[4249]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:34:48 no-preload-675668 kubelet[4249]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:34:48 no-preload-675668 kubelet[4249]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:34:58 no-preload-675668 kubelet[4249]: E0108 22:34:58.488850    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:35:11 no-preload-675668 kubelet[4249]: E0108 22:35:11.487670    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	Jan 08 22:35:24 no-preload-675668 kubelet[4249]: E0108 22:35:24.489280    4249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vb2kj" podUID="45489720-2506-46fa-8833-02cbae6f122b"
	
	
	==> storage-provisioner [7e15e1c41230c49e7ceafc44c787366122102aebf4a34bdcd4a2e60efd992c51] <==
	I0108 22:21:05.995495       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:21:06.011409       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:21:06.011555       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:21:06.032216       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:21:06.034595       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-675668_30d73bf1-3d01-4127-a18b-ef42b5387705!
	I0108 22:21:06.040335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6141c73c-6936-478d-9a5e-025b74c98f00", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-675668_30d73bf1-3d01-4127-a18b-ef42b5387705 became leader
	I0108 22:21:06.135808       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-675668_30d73bf1-3d01-4127-a18b-ef42b5387705!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-675668 -n no-preload-675668
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-675668 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vb2kj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-675668 describe pod metrics-server-57f55c9bc5-vb2kj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-675668 describe pod metrics-server-57f55c9bc5-vb2kj: exit status 1 (73.488744ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vb2kj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-675668 describe pod metrics-server-57f55c9bc5-vb2kj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (319.38s)
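The only non-running pod the post-mortem found was metrics-server-57f55c9bc5-vb2kj, and the kubelet log above shows why: the addon's image is redirected to the unreachable fake.domain registry, so the pull backs off indefinitely; by the time the final describe ran, that pod name presumably no longer existed (the ReplicaSet sync entries in the controller-manager log suggest it had been replaced), hence the NotFound error. A minimal manual re-check, assuming the no-preload-675668 profile still exists; the k8s-app=metrics-server label selector is an assumption, not taken from the test:

	# Hypothetical re-check; mirrors what the post-mortem queries, not the test's own code.
	kubectl --context no-preload-675668 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context no-preload-675668 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected image: fake.domain/registry.k8s.io/echoserver:1.4, matching the ImagePullBackOff entries in the kubelet log above.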

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (100.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 22:34:44.964699  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 22:34:56.854620  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-903819 -n embed-certs-903819
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-08 22:36:13.627888521 +0000 UTC m=+5649.329010894
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-903819 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-903819 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.032µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-903819 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
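The describe call above produced no output only because the test's context deadline had already expired (note the 2.032µs duration), so the image assertion at start_stop_delete_test.go:297 had nothing to inspect. A minimal manual sketch of the same check, assuming the embed-certs-903819 context is still reachable; the deployment and namespace names are taken from the log, the jsonpath layout is an assumption:

	# Hypothetical manual check of the dashboard addon image; not the test's own code.
	kubectl --context embed-certs-903819 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# Expected to contain registry.k8s.io/echoserver:1.4, per the "addons enable dashboard" --images flag in the Audit table below.
	kubectl --context embed-certs-903819 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard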
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903819 -n embed-certs-903819
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-903819 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-903819 logs -n 25: (2.891321906s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-523607                              | cert-expiration-523607       | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343954 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:06 UTC |
	|         | disable-driver-mounts-343954                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:09 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079759        | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC | 08 Jan 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-675668             | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-903819            | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-292054  | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC | 08 Jan 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC |                     |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079759             | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC | 08 Jan 24 22:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-675668                  | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-903819                 | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-292054       | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:26 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:35 UTC |
	| start   | -p newest-cni-154365 --memory=2200 --alsologtostderr   | newest-cni-154365            | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:36 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:35 UTC |
	| start   | -p auto-587823 --memory=3072                           | auto-587823                  | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-154365             | newest-cni-154365            | jenkins | v1.32.0 | 08 Jan 24 22:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:35:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:35:32.814142  381248 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:35:32.814329  381248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:35:32.814340  381248 out.go:309] Setting ErrFile to fd 2...
	I0108 22:35:32.814348  381248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:35:32.814581  381248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:35:32.815193  381248 out.go:303] Setting JSON to false
	I0108 22:35:32.816333  381248 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11859,"bootTime":1704741474,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:35:32.816410  381248 start.go:138] virtualization: kvm guest
	I0108 22:35:32.820216  381248 out.go:177] * [auto-587823] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:35:32.822220  381248 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:35:32.822298  381248 notify.go:220] Checking for updates...
	I0108 22:35:32.825822  381248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:35:32.827770  381248 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:35:32.829485  381248 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:35:32.831288  381248 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:35:32.832813  381248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:35:32.834527  381248 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:35:32.834630  381248 config.go:182] Loaded profile config "embed-certs-903819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:35:32.834724  381248 config.go:182] Loaded profile config "newest-cni-154365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:35:32.834804  381248 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:35:32.881143  381248 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 22:35:32.882586  381248 start.go:298] selected driver: kvm2
	I0108 22:35:32.882619  381248 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:35:32.882642  381248 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:35:32.883641  381248 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:35:32.883716  381248 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:35:32.902359  381248 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:35:32.902408  381248 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 22:35:32.902657  381248 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:35:32.902720  381248 cni.go:84] Creating CNI manager for ""
	I0108 22:35:32.902730  381248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:35:32.902743  381248 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 22:35:32.902753  381248 start_flags.go:321] config:
	{Name:auto-587823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-587823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:35:32.903271  381248 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:35:32.905456  381248 out.go:177] * Starting control plane node auto-587823 in cluster auto-587823
	I0108 22:35:32.906747  381248 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:35:32.906801  381248 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:35:32.906818  381248 cache.go:56] Caching tarball of preloaded images
	I0108 22:35:32.906917  381248 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:35:32.906934  381248 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:35:32.907041  381248 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/auto-587823/config.json ...
	I0108 22:35:32.907065  381248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/auto-587823/config.json: {Name:mk4d3614e9a132cb4290991126e37a1a60c738ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:35:32.907266  381248 start.go:365] acquiring machines lock for auto-587823: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:35:34.177302  381248 start.go:369] acquired machines lock for "auto-587823" in 1.269987766s
	I0108 22:35:34.177378  381248 start.go:93] Provisioning new machine with config: &{Name:auto-587823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-587823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:35:34.177520  381248 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 22:35:31.803682  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:31.865029  380658 main.go:141] libmachine: (newest-cni-154365) Found IP for machine: 192.168.39.87
	I0108 22:35:31.865060  380658 main.go:141] libmachine: (newest-cni-154365) Reserving static IP address...
	I0108 22:35:31.865084  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has current primary IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:31.865650  380658 main.go:141] libmachine: (newest-cni-154365) DBG | unable to find host DHCP lease matching {name: "newest-cni-154365", mac: "52:54:00:a3:78:62", ip: "192.168.39.87"} in network mk-newest-cni-154365
	I0108 22:35:32.449263  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Getting to WaitForSSH function...
	I0108 22:35:32.449301  380658 main.go:141] libmachine: (newest-cni-154365) Reserved static IP address: 192.168.39.87
	I0108 22:35:32.449317  380658 main.go:141] libmachine: (newest-cni-154365) Waiting for SSH to be available...
	I0108 22:35:32.452337  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:32.452912  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:32.452955  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:32.453248  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Using SSH client type: external
	I0108 22:35:32.453286  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa (-rw-------)
	I0108 22:35:32.453317  380658 main.go:141] libmachine: (newest-cni-154365) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:35:32.453330  380658 main.go:141] libmachine: (newest-cni-154365) DBG | About to run SSH command:
	I0108 22:35:32.453342  380658 main.go:141] libmachine: (newest-cni-154365) DBG | exit 0
	I0108 22:35:32.559737  380658 main.go:141] libmachine: (newest-cni-154365) DBG | SSH cmd err, output: <nil>: 
	I0108 22:35:32.560100  380658 main.go:141] libmachine: (newest-cni-154365) KVM machine creation complete!
	I0108 22:35:32.560482  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetConfigRaw
	I0108 22:35:32.561304  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:35:32.561553  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:35:32.561779  380658 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 22:35:32.561799  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetState
	I0108 22:35:32.563662  380658 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 22:35:32.563722  380658 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 22:35:32.563731  380658 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 22:35:32.563743  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:32.566810  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:32.567260  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:32.567291  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:32.567529  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:32.567804  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:32.567992  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:32.568185  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:32.568377  380658 main.go:141] libmachine: Using SSH client type: native
	I0108 22:35:32.568830  380658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0108 22:35:32.568846  380658 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 22:35:32.702837  380658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:35:32.702867  380658 main.go:141] libmachine: Detecting the provisioner...
	I0108 22:35:32.702876  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:32.706491  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:32.706971  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:32.707031  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:32.707352  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:32.707704  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:32.707917  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:32.708149  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:32.708304  380658 main.go:141] libmachine: Using SSH client type: native
	I0108 22:35:32.708632  380658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0108 22:35:32.708645  380658 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 22:35:32.853040  380658 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 22:35:32.853164  380658 main.go:141] libmachine: found compatible host: buildroot
	I0108 22:35:32.853180  380658 main.go:141] libmachine: Provisioning with buildroot...
	I0108 22:35:32.853197  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetMachineName
	I0108 22:35:32.853511  380658 buildroot.go:166] provisioning hostname "newest-cni-154365"
	I0108 22:35:32.853540  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetMachineName
	I0108 22:35:32.853788  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:32.858167  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:32.858709  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:32.858745  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:32.858998  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:32.859312  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:32.859605  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:32.859841  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:32.860056  380658 main.go:141] libmachine: Using SSH client type: native
	I0108 22:35:32.860574  380658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0108 22:35:32.860590  380658 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-154365 && echo "newest-cni-154365" | sudo tee /etc/hostname
	I0108 22:35:33.016115  380658 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-154365
	
	I0108 22:35:33.016160  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:33.020021  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.020387  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:33.020430  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.020641  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:33.020909  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:33.021115  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:33.021240  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:33.021399  380658 main.go:141] libmachine: Using SSH client type: native
	I0108 22:35:33.021756  380658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0108 22:35:33.021776  380658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-154365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-154365/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-154365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:35:33.164639  380658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:35:33.164692  380658 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:35:33.164756  380658 buildroot.go:174] setting up certificates
	I0108 22:35:33.164786  380658 provision.go:83] configureAuth start
	I0108 22:35:33.164806  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetMachineName
	I0108 22:35:33.165224  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetIP
	I0108 22:35:33.168604  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.169114  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:33.169144  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.169416  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:33.172044  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.172398  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:33.172448  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.172652  380658 provision.go:138] copyHostCerts
	I0108 22:35:33.172758  380658 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:35:33.172777  380658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:35:33.172874  380658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:35:33.172993  380658 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:35:33.173005  380658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:35:33.173030  380658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:35:33.173090  380658 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:35:33.173097  380658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:35:33.173116  380658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:35:33.173171  380658 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.newest-cni-154365 san=[192.168.39.87 192.168.39.87 localhost 127.0.0.1 minikube newest-cni-154365]
	I0108 22:35:33.338052  380658 provision.go:172] copyRemoteCerts
	I0108 22:35:33.338142  380658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:35:33.338174  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:33.340969  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.341324  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:33.341355  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.341582  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:33.341799  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:33.342019  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:33.342210  380658 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa Username:docker}
	I0108 22:35:33.440270  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 22:35:33.467764  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:35:33.493915  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:35:33.520119  380658 provision.go:86] duration metric: configureAuth took 355.309724ms
	I0108 22:35:33.520174  380658 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:35:33.520409  380658 config.go:182] Loaded profile config "newest-cni-154365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:35:33.520509  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:33.524292  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.524624  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:33.524649  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.524953  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:33.525267  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:33.525476  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:33.525649  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:33.525828  380658 main.go:141] libmachine: Using SSH client type: native
	I0108 22:35:33.526210  380658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0108 22:35:33.526230  380658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:35:33.880418  380658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:35:33.880474  380658 main.go:141] libmachine: Checking connection to Docker...
	I0108 22:35:33.880486  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetURL
	I0108 22:35:33.881896  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Using libvirt version 6000000
	I0108 22:35:33.884883  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.885339  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:33.885375  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.885605  380658 main.go:141] libmachine: Docker is up and running!
	I0108 22:35:33.885623  380658 main.go:141] libmachine: Reticulating splines...
	I0108 22:35:33.885632  380658 client.go:171] LocalClient.Create took 27.270207394s
	I0108 22:35:33.885659  380658 start.go:167] duration metric: libmachine.API.Create for "newest-cni-154365" took 27.270311876s
	I0108 22:35:33.885670  380658 start.go:300] post-start starting for "newest-cni-154365" (driver="kvm2")
	I0108 22:35:33.885684  380658 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:35:33.885705  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:35:33.886041  380658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:35:33.886090  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:33.889303  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.889693  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:33.889729  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:33.889946  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:33.890174  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:33.890308  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:33.890526  380658 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa Username:docker}
	I0108 22:35:33.990078  380658 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:35:33.995296  380658 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:35:33.995387  380658 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:35:33.995492  380658 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:35:33.995669  380658 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:35:33.995846  380658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:35:34.006803  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:35:34.032116  380658 start.go:303] post-start completed in 146.42377ms
	I0108 22:35:34.032200  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetConfigRaw
	I0108 22:35:34.033081  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetIP
	I0108 22:35:34.036610  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.037141  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:34.037186  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.037506  380658 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/config.json ...
	I0108 22:35:34.037759  380658 start.go:128] duration metric: createHost completed in 27.445165574s
	I0108 22:35:34.037792  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:34.040914  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.041300  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:34.041326  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.041551  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:34.041868  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:34.042058  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:34.042192  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:34.042430  380658 main.go:141] libmachine: Using SSH client type: native
	I0108 22:35:34.042847  380658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0108 22:35:34.042863  380658 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:35:34.177107  380658 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704753334.160446122
	
	I0108 22:35:34.177139  380658 fix.go:206] guest clock: 1704753334.160446122
	I0108 22:35:34.177151  380658 fix.go:219] Guest: 2024-01-08 22:35:34.160446122 +0000 UTC Remote: 2024-01-08 22:35:34.03777421 +0000 UTC m=+27.594528438 (delta=122.671912ms)
	I0108 22:35:34.177178  380658 fix.go:190] guest clock delta is within tolerance: 122.671912ms
	I0108 22:35:34.177184  380658 start.go:83] releasing machines lock for "newest-cni-154365", held for 27.585085994s
	I0108 22:35:34.177220  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:35:34.177588  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetIP
	I0108 22:35:34.180777  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.181152  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:34.181196  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.181385  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:35:34.182043  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:35:34.182299  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:35:34.182414  380658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:35:34.182473  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:34.182552  380658 ssh_runner.go:195] Run: cat /version.json
	I0108 22:35:34.182579  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:35:34.186352  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.186390  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.187195  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:34.187232  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.187271  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:34.187292  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:34.187798  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:34.187821  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:35:34.188113  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:34.188125  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:35:34.188341  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:34.188354  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:35:34.188490  380658 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa Username:docker}
	I0108 22:35:34.188598  380658 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa Username:docker}
	I0108 22:35:34.314853  380658 ssh_runner.go:195] Run: systemctl --version
	I0108 22:35:34.323910  380658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:35:34.498183  380658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:35:34.504832  380658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:35:34.504929  380658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:35:34.523251  380658 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:35:34.523285  380658 start.go:475] detecting cgroup driver to use...
	I0108 22:35:34.523434  380658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:35:34.538999  380658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:35:34.555123  380658 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:35:34.555207  380658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:35:34.570319  380658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:35:34.586481  380658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:35:34.706284  380658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:35:34.838553  380658 docker.go:219] disabling docker service ...
	I0108 22:35:34.838673  380658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:35:34.854398  380658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:35:34.873458  380658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:35:34.990869  380658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:35:35.117636  380658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:35:35.134667  380658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:35:35.156769  380658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:35:35.156854  380658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:35:35.171009  380658 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:35:35.171094  380658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:35:35.183309  380658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:35:35.196157  380658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:35:35.207983  380658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:35:35.222189  380658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:35:35.233240  380658 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:35:35.233339  380658 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:35:35.249390  380658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:35:35.260800  380658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:35:35.411752  380658 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:35:35.624966  380658 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:35:35.625138  380658 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:35:35.632185  380658 start.go:543] Will wait 60s for crictl version
	I0108 22:35:35.632259  380658 ssh_runner.go:195] Run: which crictl
	I0108 22:35:35.637265  380658 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:35:35.691110  380658 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:35:35.691236  380658 ssh_runner.go:195] Run: crio --version
	I0108 22:35:35.749440  380658 ssh_runner.go:195] Run: crio --version
	I0108 22:35:35.808110  380658 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0108 22:35:35.810074  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetIP
	I0108 22:35:35.815955  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:35.816553  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:35:35.816599  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:35:35.816878  380658 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 22:35:35.822708  380658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:35:35.837039  380658 localpath.go:92] copying /home/jenkins/minikube-integration/17866-334768/.minikube/client.crt -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/client.crt
	I0108 22:35:35.837331  380658 localpath.go:117] copying /home/jenkins/minikube-integration/17866-334768/.minikube/client.key -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/client.key
	I0108 22:35:35.839401  380658 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0108 22:35:35.841093  380658 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:35:35.841205  380658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:35:35.892630  380658 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0108 22:35:35.892716  380658 ssh_runner.go:195] Run: which lz4
	I0108 22:35:35.898990  380658 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:35:35.906352  380658 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:35:35.906407  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401795125 bytes)
	I0108 22:35:34.180837  381248 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0108 22:35:34.181059  381248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:35:34.181117  381248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:35:34.202460  381248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39449
	I0108 22:35:34.203011  381248 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:35:34.203773  381248 main.go:141] libmachine: Using API Version  1
	I0108 22:35:34.203803  381248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:35:34.204295  381248 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:35:34.204585  381248 main.go:141] libmachine: (auto-587823) Calling .GetMachineName
	I0108 22:35:34.204767  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:35:34.205021  381248 start.go:159] libmachine.API.Create for "auto-587823" (driver="kvm2")
	I0108 22:35:34.205056  381248 client.go:168] LocalClient.Create starting
	I0108 22:35:34.205102  381248 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem
	I0108 22:35:34.205162  381248 main.go:141] libmachine: Decoding PEM data...
	I0108 22:35:34.205210  381248 main.go:141] libmachine: Parsing certificate...
	I0108 22:35:34.205306  381248 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem
	I0108 22:35:34.205346  381248 main.go:141] libmachine: Decoding PEM data...
	I0108 22:35:34.205364  381248 main.go:141] libmachine: Parsing certificate...
	I0108 22:35:34.205401  381248 main.go:141] libmachine: Running pre-create checks...
	I0108 22:35:34.205420  381248 main.go:141] libmachine: (auto-587823) Calling .PreCreateCheck
	I0108 22:35:34.206021  381248 main.go:141] libmachine: (auto-587823) Calling .GetConfigRaw
	I0108 22:35:34.206601  381248 main.go:141] libmachine: Creating machine...
	I0108 22:35:34.206618  381248 main.go:141] libmachine: (auto-587823) Calling .Create
	I0108 22:35:34.206791  381248 main.go:141] libmachine: (auto-587823) Creating KVM machine...
	I0108 22:35:34.208471  381248 main.go:141] libmachine: (auto-587823) DBG | found existing default KVM network
	I0108 22:35:34.210306  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:34.210102  381271 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e5:04:a2} reservation:<nil>}
	I0108 22:35:34.211397  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:34.211249  381271 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c1:31:02} reservation:<nil>}
	I0108 22:35:34.212798  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:34.212676  381271 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000113e20}
	I0108 22:35:34.219323  381248 main.go:141] libmachine: (auto-587823) DBG | trying to create private KVM network mk-auto-587823 192.168.61.0/24...
	I0108 22:35:34.322483  381248 main.go:141] libmachine: (auto-587823) DBG | private KVM network mk-auto-587823 192.168.61.0/24 created
	I0108 22:35:34.322522  381248 main.go:141] libmachine: (auto-587823) Setting up store path in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823 ...
	I0108 22:35:34.322577  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:34.322497  381271 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:35:34.322599  381248 main.go:141] libmachine: (auto-587823) Building disk image from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 22:35:34.322747  381248 main.go:141] libmachine: (auto-587823) Downloading /home/jenkins/minikube-integration/17866-334768/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 22:35:34.615580  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:34.615314  381271 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa...
	I0108 22:35:34.692699  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:34.692492  381271 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/auto-587823.rawdisk...
	I0108 22:35:34.692745  381248 main.go:141] libmachine: (auto-587823) DBG | Writing magic tar header
	I0108 22:35:34.692765  381248 main.go:141] libmachine: (auto-587823) DBG | Writing SSH key tar header
	I0108 22:35:34.692787  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:34.692623  381271 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823 ...
	I0108 22:35:34.692803  381248 main.go:141] libmachine: (auto-587823) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823 (perms=drwx------)
	I0108 22:35:34.692822  381248 main.go:141] libmachine: (auto-587823) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines (perms=drwxr-xr-x)
	I0108 22:35:34.692836  381248 main.go:141] libmachine: (auto-587823) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube (perms=drwxr-xr-x)
	I0108 22:35:34.692866  381248 main.go:141] libmachine: (auto-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823
	I0108 22:35:34.692883  381248 main.go:141] libmachine: (auto-587823) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768 (perms=drwxrwxr-x)
	I0108 22:35:34.692901  381248 main.go:141] libmachine: (auto-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines
	I0108 22:35:34.692914  381248 main.go:141] libmachine: (auto-587823) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 22:35:34.692932  381248 main.go:141] libmachine: (auto-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:35:34.692948  381248 main.go:141] libmachine: (auto-587823) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 22:35:34.692969  381248 main.go:141] libmachine: (auto-587823) Creating domain...
	I0108 22:35:34.692985  381248 main.go:141] libmachine: (auto-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768
	I0108 22:35:34.693010  381248 main.go:141] libmachine: (auto-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 22:35:34.693025  381248 main.go:141] libmachine: (auto-587823) DBG | Checking permissions on dir: /home/jenkins
	I0108 22:35:34.693040  381248 main.go:141] libmachine: (auto-587823) DBG | Checking permissions on dir: /home
	I0108 22:35:34.693053  381248 main.go:141] libmachine: (auto-587823) DBG | Skipping /home - not owner
	I0108 22:35:34.694543  381248 main.go:141] libmachine: (auto-587823) define libvirt domain using xml: 
	I0108 22:35:34.694574  381248 main.go:141] libmachine: (auto-587823) <domain type='kvm'>
	I0108 22:35:34.694594  381248 main.go:141] libmachine: (auto-587823)   <name>auto-587823</name>
	I0108 22:35:34.694603  381248 main.go:141] libmachine: (auto-587823)   <memory unit='MiB'>3072</memory>
	I0108 22:35:34.694614  381248 main.go:141] libmachine: (auto-587823)   <vcpu>2</vcpu>
	I0108 22:35:34.694626  381248 main.go:141] libmachine: (auto-587823)   <features>
	I0108 22:35:34.694634  381248 main.go:141] libmachine: (auto-587823)     <acpi/>
	I0108 22:35:34.694643  381248 main.go:141] libmachine: (auto-587823)     <apic/>
	I0108 22:35:34.694658  381248 main.go:141] libmachine: (auto-587823)     <pae/>
	I0108 22:35:34.694669  381248 main.go:141] libmachine: (auto-587823)     
	I0108 22:35:34.694681  381248 main.go:141] libmachine: (auto-587823)   </features>
	I0108 22:35:34.694692  381248 main.go:141] libmachine: (auto-587823)   <cpu mode='host-passthrough'>
	I0108 22:35:34.694706  381248 main.go:141] libmachine: (auto-587823)   
	I0108 22:35:34.694721  381248 main.go:141] libmachine: (auto-587823)   </cpu>
	I0108 22:35:34.694734  381248 main.go:141] libmachine: (auto-587823)   <os>
	I0108 22:35:34.694746  381248 main.go:141] libmachine: (auto-587823)     <type>hvm</type>
	I0108 22:35:34.694754  381248 main.go:141] libmachine: (auto-587823)     <boot dev='cdrom'/>
	I0108 22:35:34.694762  381248 main.go:141] libmachine: (auto-587823)     <boot dev='hd'/>
	I0108 22:35:34.694768  381248 main.go:141] libmachine: (auto-587823)     <bootmenu enable='no'/>
	I0108 22:35:34.694776  381248 main.go:141] libmachine: (auto-587823)   </os>
	I0108 22:35:34.694781  381248 main.go:141] libmachine: (auto-587823)   <devices>
	I0108 22:35:34.694796  381248 main.go:141] libmachine: (auto-587823)     <disk type='file' device='cdrom'>
	I0108 22:35:34.694816  381248 main.go:141] libmachine: (auto-587823)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/boot2docker.iso'/>
	I0108 22:35:34.694828  381248 main.go:141] libmachine: (auto-587823)       <target dev='hdc' bus='scsi'/>
	I0108 22:35:34.694839  381248 main.go:141] libmachine: (auto-587823)       <readonly/>
	I0108 22:35:34.694849  381248 main.go:141] libmachine: (auto-587823)     </disk>
	I0108 22:35:34.694861  381248 main.go:141] libmachine: (auto-587823)     <disk type='file' device='disk'>
	I0108 22:35:34.694871  381248 main.go:141] libmachine: (auto-587823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 22:35:34.694884  381248 main.go:141] libmachine: (auto-587823)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/auto-587823.rawdisk'/>
	I0108 22:35:34.694897  381248 main.go:141] libmachine: (auto-587823)       <target dev='hda' bus='virtio'/>
	I0108 22:35:34.694910  381248 main.go:141] libmachine: (auto-587823)     </disk>
	I0108 22:35:34.694930  381248 main.go:141] libmachine: (auto-587823)     <interface type='network'>
	I0108 22:35:34.694944  381248 main.go:141] libmachine: (auto-587823)       <source network='mk-auto-587823'/>
	I0108 22:35:34.694957  381248 main.go:141] libmachine: (auto-587823)       <model type='virtio'/>
	I0108 22:35:34.694969  381248 main.go:141] libmachine: (auto-587823)     </interface>
	I0108 22:35:34.694982  381248 main.go:141] libmachine: (auto-587823)     <interface type='network'>
	I0108 22:35:34.694998  381248 main.go:141] libmachine: (auto-587823)       <source network='default'/>
	I0108 22:35:34.695010  381248 main.go:141] libmachine: (auto-587823)       <model type='virtio'/>
	I0108 22:35:34.695032  381248 main.go:141] libmachine: (auto-587823)     </interface>
	I0108 22:35:34.695051  381248 main.go:141] libmachine: (auto-587823)     <serial type='pty'>
	I0108 22:35:34.695066  381248 main.go:141] libmachine: (auto-587823)       <target port='0'/>
	I0108 22:35:34.695076  381248 main.go:141] libmachine: (auto-587823)     </serial>
	I0108 22:35:34.695094  381248 main.go:141] libmachine: (auto-587823)     <console type='pty'>
	I0108 22:35:34.695108  381248 main.go:141] libmachine: (auto-587823)       <target type='serial' port='0'/>
	I0108 22:35:34.695124  381248 main.go:141] libmachine: (auto-587823)     </console>
	I0108 22:35:34.695135  381248 main.go:141] libmachine: (auto-587823)     <rng model='virtio'>
	I0108 22:35:34.695149  381248 main.go:141] libmachine: (auto-587823)       <backend model='random'>/dev/random</backend>
	I0108 22:35:34.695159  381248 main.go:141] libmachine: (auto-587823)     </rng>
	I0108 22:35:34.695171  381248 main.go:141] libmachine: (auto-587823)     
	I0108 22:35:34.695184  381248 main.go:141] libmachine: (auto-587823)     
	I0108 22:35:34.695197  381248 main.go:141] libmachine: (auto-587823)   </devices>
	I0108 22:35:34.695207  381248 main.go:141] libmachine: (auto-587823) </domain>
	I0108 22:35:34.695219  381248 main.go:141] libmachine: (auto-587823) 
	I0108 22:35:34.699525  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:de:13:2b in network default
	I0108 22:35:34.700177  381248 main.go:141] libmachine: (auto-587823) Ensuring networks are active...
	I0108 22:35:34.700199  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:34.700926  381248 main.go:141] libmachine: (auto-587823) Ensuring network default is active
	I0108 22:35:34.701348  381248 main.go:141] libmachine: (auto-587823) Ensuring network mk-auto-587823 is active
	I0108 22:35:34.702004  381248 main.go:141] libmachine: (auto-587823) Getting domain xml...
	I0108 22:35:34.702793  381248 main.go:141] libmachine: (auto-587823) Creating domain...
	I0108 22:35:36.188034  381248 main.go:141] libmachine: (auto-587823) Waiting to get IP...
	I0108 22:35:36.193703  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:36.195581  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:36.195619  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:36.195421  381271 retry.go:31] will retry after 297.881518ms: waiting for machine to come up
	I0108 22:35:36.495546  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:36.496418  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:36.496442  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:36.496282  381271 retry.go:31] will retry after 376.301888ms: waiting for machine to come up
	I0108 22:35:36.874104  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:36.874681  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:36.874743  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:36.874660  381271 retry.go:31] will retry after 447.153599ms: waiting for machine to come up
	I0108 22:35:37.323589  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:37.324343  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:37.324379  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:37.324292  381271 retry.go:31] will retry after 552.958669ms: waiting for machine to come up
	I0108 22:35:37.653108  380658 crio.go:444] Took 1.754164 seconds to copy over tarball
	I0108 22:35:37.653259  380658 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:35:40.862422  380658 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.209111215s)
	I0108 22:35:40.862478  380658 crio.go:451] Took 3.209325 seconds to extract the tarball
	I0108 22:35:40.862493  380658 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:35:40.902680  380658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:35:41.018299  380658 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:35:41.018361  380658 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:35:41.018562  380658 ssh_runner.go:195] Run: crio config
	I0108 22:35:41.096015  380658 cni.go:84] Creating CNI manager for ""
	I0108 22:35:41.096050  380658 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:35:41.096089  380658 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0108 22:35:41.096193  380658 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-154365 NodeName:newest-cni-154365 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:35:41.096498  380658 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-154365"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:35:41.096621  380658 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-154365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-154365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:35:41.096701  380658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0108 22:35:41.108351  380658 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:35:41.108453  380658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:35:41.119680  380658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (418 bytes)
	I0108 22:35:41.144402  380658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0108 22:35:41.168059  380658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0108 22:35:41.189198  380658 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0108 22:35:41.194511  380658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:35:41.208713  380658 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365 for IP: 192.168.39.87
	I0108 22:35:41.208765  380658 certs.go:190] acquiring lock for shared ca certs: {Name:mk6b90f8060977fed954fc3cfa2d0a35c06dd778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:35:41.208957  380658 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key
	I0108 22:35:41.208999  380658 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key
	I0108 22:35:41.209119  380658 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/client.key
	I0108 22:35:41.209147  380658 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.key.6a3c595f
	I0108 22:35:41.209158  380658 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.crt.6a3c595f with IP's: [192.168.39.87 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 22:35:41.345845  380658 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.crt.6a3c595f ...
	I0108 22:35:41.345892  380658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.crt.6a3c595f: {Name:mka481c858be83af3e4bc4751ea67bf0669d8e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:35:41.346098  380658 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.key.6a3c595f ...
	I0108 22:35:41.346120  380658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.key.6a3c595f: {Name:mk0634357452a9dab0f604732a2b796e44bda751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:35:41.346195  380658 certs.go:337] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.crt.6a3c595f -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.crt
	I0108 22:35:41.346271  380658 certs.go:341] copying /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.key.6a3c595f -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.key
	I0108 22:35:41.346323  380658 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/proxy-client.key
	I0108 22:35:41.346337  380658 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/proxy-client.crt with IP's: []
	I0108 22:35:41.467645  380658 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/proxy-client.crt ...
	I0108 22:35:41.467678  380658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/proxy-client.crt: {Name:mkd6e4507850685d27f51e47c965ca163c7730c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:35:41.467842  380658 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/proxy-client.key ...
	I0108 22:35:41.467856  380658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/proxy-client.key: {Name:mk39ba71fdfb21efbaa7add8a737463bd60e152d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:35:41.468020  380658 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem (1338 bytes)
	W0108 22:35:41.468067  380658 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982_empty.pem, impossibly tiny 0 bytes
	I0108 22:35:41.468078  380658 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:35:41.468120  380658 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:35:41.468156  380658 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:35:41.468179  380658 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem (1679 bytes)
	I0108 22:35:41.468217  380658 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:35:41.468852  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:35:41.498280  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:35:37.879336  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:37.879985  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:37.880025  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:37.879932  381271 retry.go:31] will retry after 709.199143ms: waiting for machine to come up
	I0108 22:35:38.590744  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:38.591349  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:38.591404  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:38.591305  381271 retry.go:31] will retry after 736.464061ms: waiting for machine to come up
	I0108 22:35:39.329403  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:39.330394  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:39.330441  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:39.330336  381271 retry.go:31] will retry after 1.126404356s: waiting for machine to come up
	I0108 22:35:40.458632  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:40.459116  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:40.459153  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:40.459054  381271 retry.go:31] will retry after 1.284789698s: waiting for machine to come up
	I0108 22:35:41.746150  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:41.746689  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:41.746719  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:41.746652  381271 retry.go:31] will retry after 1.454476457s: waiting for machine to come up
	I0108 22:35:41.527607  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:35:41.657855  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 22:35:41.687009  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:35:41.715110  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:35:41.745878  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:35:41.775642  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:35:41.804610  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/341982.pem --> /usr/share/ca-certificates/341982.pem (1338 bytes)
	I0108 22:35:41.837319  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /usr/share/ca-certificates/3419822.pem (1708 bytes)
	I0108 22:35:41.866718  380658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:35:41.900321  380658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:35:41.920132  380658 ssh_runner.go:195] Run: openssl version
	I0108 22:35:41.927888  380658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/341982.pem && ln -fs /usr/share/ca-certificates/341982.pem /etc/ssl/certs/341982.pem"
	I0108 22:35:41.941169  380658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/341982.pem
	I0108 22:35:41.947489  380658 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 21:11 /usr/share/ca-certificates/341982.pem
	I0108 22:35:41.947606  380658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/341982.pem
	I0108 22:35:41.955619  380658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/341982.pem /etc/ssl/certs/51391683.0"
	I0108 22:35:41.969160  380658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3419822.pem && ln -fs /usr/share/ca-certificates/3419822.pem /etc/ssl/certs/3419822.pem"
	I0108 22:35:41.982633  380658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3419822.pem
	I0108 22:35:41.987723  380658 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 21:11 /usr/share/ca-certificates/3419822.pem
	I0108 22:35:41.987820  380658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3419822.pem
	I0108 22:35:41.995441  380658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3419822.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:35:42.008800  380658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:35:42.021521  380658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:35:42.027876  380658 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 21:02 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:35:42.028012  380658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:35:42.035540  380658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:35:42.048474  380658 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:35:42.053752  380658 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:35:42.053828  380658 kubeadm.go:404] StartCluster: {Name:newest-cni-154365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-154365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:35:42.053954  380658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:35:42.054056  380658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:35:42.104230  380658 cri.go:89] found id: ""
	I0108 22:35:42.104366  380658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:35:42.118432  380658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:35:42.129372  380658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:35:42.141424  380658 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:35:42.141538  380658 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:35:42.590310  380658 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:35:43.202930  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:43.203617  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:43.203650  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:43.203548  381271 retry.go:31] will retry after 2.201296848s: waiting for machine to come up
	I0108 22:35:45.406609  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:45.407196  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:45.407224  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:45.407128  381271 retry.go:31] will retry after 2.749833876s: waiting for machine to come up
	I0108 22:35:48.158717  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:48.159225  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:48.159279  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:48.159183  381271 retry.go:31] will retry after 3.335662358s: waiting for machine to come up
	I0108 22:35:51.498524  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:51.499274  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:51.499321  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:51.499175  381271 retry.go:31] will retry after 4.48092773s: waiting for machine to come up
	I0108 22:35:56.333258  380658 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0108 22:35:56.333366  380658 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:35:56.333444  380658 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:35:56.333554  380658 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:35:56.333744  380658 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:35:56.333823  380658 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:35:56.335898  380658 out.go:204]   - Generating certificates and keys ...
	I0108 22:35:56.336025  380658 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:35:56.336126  380658 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:35:56.336210  380658 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:35:56.336288  380658 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:35:56.336367  380658 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 22:35:56.336436  380658 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 22:35:56.336537  380658 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 22:35:56.336732  380658 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-154365] and IPs [192.168.39.87 127.0.0.1 ::1]
	I0108 22:35:56.336813  380658 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 22:35:56.337019  380658 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-154365] and IPs [192.168.39.87 127.0.0.1 ::1]
	I0108 22:35:56.337114  380658 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:35:56.337210  380658 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:35:56.337273  380658 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:35:56.337389  380658 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:35:56.337467  380658 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:35:56.337543  380658 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0108 22:35:56.337616  380658 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:35:56.337691  380658 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:35:56.337752  380658 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:35:56.337823  380658 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:35:56.337912  380658 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:35:56.339737  380658 out.go:204]   - Booting up control plane ...
	I0108 22:35:56.339856  380658 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:35:56.339971  380658 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:35:56.340034  380658 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:35:56.340123  380658 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:35:56.340200  380658 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:35:56.340256  380658 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:35:56.340423  380658 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:35:56.340535  380658 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.006609 seconds
	I0108 22:35:56.340662  380658 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:35:56.340850  380658 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:35:56.340933  380658 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:35:56.341155  380658 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-154365 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:35:56.341242  380658 kubeadm.go:322] [bootstrap-token] Using token: r5wdni.un3splc06gjhvpzd
	I0108 22:35:56.342643  380658 out.go:204]   - Configuring RBAC rules ...
	I0108 22:35:56.342784  380658 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:35:56.342884  380658 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:35:56.343073  380658 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:35:56.343243  380658 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:35:56.343415  380658 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:35:56.343534  380658 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:35:56.343655  380658 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:35:56.343703  380658 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:35:56.343767  380658 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:35:56.343780  380658 kubeadm.go:322] 
	I0108 22:35:56.343873  380658 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:35:56.343886  380658 kubeadm.go:322] 
	I0108 22:35:56.343975  380658 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:35:56.343988  380658 kubeadm.go:322] 
	I0108 22:35:56.344019  380658 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:35:56.344087  380658 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:35:56.344141  380658 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:35:56.344150  380658 kubeadm.go:322] 
	I0108 22:35:56.344232  380658 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:35:56.344243  380658 kubeadm.go:322] 
	I0108 22:35:56.344303  380658 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:35:56.344318  380658 kubeadm.go:322] 
	I0108 22:35:56.344378  380658 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:35:56.344461  380658 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:35:56.344550  380658 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:35:56.344558  380658 kubeadm.go:322] 
	I0108 22:35:56.344654  380658 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:35:56.344765  380658 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:35:56.344775  380658 kubeadm.go:322] 
	I0108 22:35:56.344875  380658 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token r5wdni.un3splc06gjhvpzd \
	I0108 22:35:56.344991  380658 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:35:56.345042  380658 kubeadm.go:322] 	--control-plane 
	I0108 22:35:56.345060  380658 kubeadm.go:322] 
	I0108 22:35:56.345176  380658 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:35:56.345188  380658 kubeadm.go:322] 
	I0108 22:35:56.345277  380658 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token r5wdni.un3splc06gjhvpzd \
	I0108 22:35:56.345419  380658 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:35:56.345441  380658 cni.go:84] Creating CNI manager for ""
	I0108 22:35:56.345461  380658 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:35:56.348111  380658 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:35:56.349728  380658 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:35:56.400481  380658 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:35:56.447634  380658 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:35:56.447741  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:35:56.447802  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=newest-cni-154365 minikube.k8s.io/updated_at=2024_01_08T22_35_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:35:55.982824  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:35:55.983440  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find current IP address of domain auto-587823 in network mk-auto-587823
	I0108 22:35:55.983473  381248 main.go:141] libmachine: (auto-587823) DBG | I0108 22:35:55.983377  381271 retry.go:31] will retry after 5.548580795s: waiting for machine to come up
	I0108 22:35:56.536915  380658 ops.go:34] apiserver oom_adj: -16
	I0108 22:35:56.977124  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:35:57.478032  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:35:57.977580  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:35:58.477784  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:35:58.977407  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:35:59.478151  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:35:59.977149  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:00.477431  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:00.977412  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:01.477910  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:01.535480  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:01.536101  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has current primary IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:01.536127  381248 main.go:141] libmachine: (auto-587823) Found IP for machine: 192.168.61.208
	I0108 22:36:01.536142  381248 main.go:141] libmachine: (auto-587823) Reserving static IP address...
	I0108 22:36:01.536532  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find host DHCP lease matching {name: "auto-587823", mac: "52:54:00:59:74:93", ip: "192.168.61.208"} in network mk-auto-587823
	I0108 22:36:01.639448  381248 main.go:141] libmachine: (auto-587823) DBG | Getting to WaitForSSH function...
	I0108 22:36:01.639494  381248 main.go:141] libmachine: (auto-587823) Reserved static IP address: 192.168.61.208
	I0108 22:36:01.639511  381248 main.go:141] libmachine: (auto-587823) Waiting for SSH to be available...
	I0108 22:36:01.642823  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:01.643216  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823
	I0108 22:36:01.643242  381248 main.go:141] libmachine: (auto-587823) DBG | unable to find defined IP address of network mk-auto-587823 interface with MAC address 52:54:00:59:74:93
	I0108 22:36:01.643379  381248 main.go:141] libmachine: (auto-587823) DBG | Using SSH client type: external
	I0108 22:36:01.643412  381248 main.go:141] libmachine: (auto-587823) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa (-rw-------)
	I0108 22:36:01.643459  381248 main.go:141] libmachine: (auto-587823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:36:01.643477  381248 main.go:141] libmachine: (auto-587823) DBG | About to run SSH command:
	I0108 22:36:01.643515  381248 main.go:141] libmachine: (auto-587823) DBG | exit 0
	I0108 22:36:01.648031  381248 main.go:141] libmachine: (auto-587823) DBG | SSH cmd err, output: exit status 255: 
	I0108 22:36:01.648070  381248 main.go:141] libmachine: (auto-587823) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0108 22:36:01.648083  381248 main.go:141] libmachine: (auto-587823) DBG | command : exit 0
	I0108 22:36:01.648114  381248 main.go:141] libmachine: (auto-587823) DBG | err     : exit status 255
	I0108 22:36:01.648127  381248 main.go:141] libmachine: (auto-587823) DBG | output  : 
	I0108 22:36:01.977965  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:02.478048  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:02.977809  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:03.477626  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:03.977947  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:04.477968  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:04.977182  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:05.477221  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:05.978022  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:06.477297  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:04.648141  381248 main.go:141] libmachine: (auto-587823) DBG | Getting to WaitForSSH function...
	I0108 22:36:04.651350  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:04.651836  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:04.651877  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:04.651987  381248 main.go:141] libmachine: (auto-587823) DBG | Using SSH client type: external
	I0108 22:36:04.652021  381248 main.go:141] libmachine: (auto-587823) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa (-rw-------)
	I0108 22:36:04.652079  381248 main.go:141] libmachine: (auto-587823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:36:04.652103  381248 main.go:141] libmachine: (auto-587823) DBG | About to run SSH command:
	I0108 22:36:04.652122  381248 main.go:141] libmachine: (auto-587823) DBG | exit 0
	I0108 22:36:04.796493  381248 main.go:141] libmachine: (auto-587823) DBG | SSH cmd err, output: <nil>: 
	I0108 22:36:04.796865  381248 main.go:141] libmachine: (auto-587823) KVM machine creation complete!
	I0108 22:36:04.797273  381248 main.go:141] libmachine: (auto-587823) Calling .GetConfigRaw
	I0108 22:36:04.797995  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:36:04.798306  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:36:04.798580  381248 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 22:36:04.798603  381248 main.go:141] libmachine: (auto-587823) Calling .GetState
	I0108 22:36:04.800618  381248 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 22:36:04.800640  381248 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 22:36:04.800648  381248 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 22:36:04.800656  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:04.804321  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:04.804955  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:04.805000  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:04.805242  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:04.805541  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:04.805735  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:04.805881  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:04.806092  381248 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:04.806518  381248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.208 22 <nil> <nil>}
	I0108 22:36:04.806536  381248 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 22:36:04.942800  381248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:36:04.942829  381248 main.go:141] libmachine: Detecting the provisioner...
	I0108 22:36:04.942838  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:04.946110  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:04.946581  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:04.946615  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:04.946897  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:04.947187  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:04.947423  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:04.947611  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:04.947834  381248 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:04.948300  381248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.208 22 <nil> <nil>}
	I0108 22:36:04.948319  381248 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 22:36:05.089914  381248 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 22:36:05.090049  381248 main.go:141] libmachine: found compatible host: buildroot
	I0108 22:36:05.090065  381248 main.go:141] libmachine: Provisioning with buildroot...
	I0108 22:36:05.090078  381248 main.go:141] libmachine: (auto-587823) Calling .GetMachineName
	I0108 22:36:05.090410  381248 buildroot.go:166] provisioning hostname "auto-587823"
	I0108 22:36:05.090440  381248 main.go:141] libmachine: (auto-587823) Calling .GetMachineName
	I0108 22:36:05.090595  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:05.094053  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.094506  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:05.094547  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.094731  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:05.094979  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:05.095210  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:05.095428  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:05.095627  381248 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:05.095951  381248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.208 22 <nil> <nil>}
	I0108 22:36:05.095971  381248 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-587823 && echo "auto-587823" | sudo tee /etc/hostname
	I0108 22:36:05.251796  381248 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-587823
	
	I0108 22:36:05.251834  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:05.254949  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.255423  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:05.255454  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.255729  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:05.255968  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:05.256176  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:05.256381  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:05.256604  381248 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:05.257025  381248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.208 22 <nil> <nil>}
	I0108 22:36:05.257042  381248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-587823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-587823/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-587823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:36:05.411833  381248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:36:05.411888  381248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:36:05.411972  381248 buildroot.go:174] setting up certificates
	I0108 22:36:05.411992  381248 provision.go:83] configureAuth start
	I0108 22:36:05.412013  381248 main.go:141] libmachine: (auto-587823) Calling .GetMachineName
	I0108 22:36:05.412348  381248 main.go:141] libmachine: (auto-587823) Calling .GetIP
	I0108 22:36:05.416102  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.416485  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:05.416519  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.416760  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:05.419212  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.419617  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:05.419654  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.419821  381248 provision.go:138] copyHostCerts
	I0108 22:36:05.419905  381248 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:36:05.419929  381248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:36:05.420007  381248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:36:05.420158  381248 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:36:05.420172  381248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:36:05.420205  381248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:36:05.420302  381248 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:36:05.420315  381248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:36:05.420367  381248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:36:05.420465  381248 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.auto-587823 san=[192.168.61.208 192.168.61.208 localhost 127.0.0.1 minikube auto-587823]
	I0108 22:36:05.722904  381248 provision.go:172] copyRemoteCerts
	I0108 22:36:05.723046  381248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:36:05.723103  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:05.726782  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.727134  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:05.727172  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.727470  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:05.727768  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:05.727996  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:05.728231  381248 sshutil.go:53] new ssh client: &{IP:192.168.61.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa Username:docker}
	I0108 22:36:05.831455  381248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:36:05.861757  381248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0108 22:36:05.889226  381248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:36:05.915717  381248 provision.go:86] duration metric: configureAuth took 503.698396ms
	I0108 22:36:05.915760  381248 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:36:05.916016  381248 config.go:182] Loaded profile config "auto-587823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:36:05.916136  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:05.919547  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.919992  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:05.920042  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:05.920418  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:05.920751  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:05.921015  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:05.921246  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:05.921484  381248 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:05.921842  381248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.208 22 <nil> <nil>}
	I0108 22:36:05.921867  381248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:36:06.290476  381248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:36:06.290511  381248 main.go:141] libmachine: Checking connection to Docker...
	I0108 22:36:06.290523  381248 main.go:141] libmachine: (auto-587823) Calling .GetURL
	I0108 22:36:06.291889  381248 main.go:141] libmachine: (auto-587823) DBG | Using libvirt version 6000000
	I0108 22:36:06.294855  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.295338  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:06.295422  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.295611  381248 main.go:141] libmachine: Docker is up and running!
	I0108 22:36:06.295632  381248 main.go:141] libmachine: Reticulating splines...
	I0108 22:36:06.295641  381248 client.go:171] LocalClient.Create took 32.090575258s
	I0108 22:36:06.295671  381248 start.go:167] duration metric: libmachine.API.Create for "auto-587823" took 32.090651873s
	I0108 22:36:06.295681  381248 start.go:300] post-start starting for "auto-587823" (driver="kvm2")
	I0108 22:36:06.295697  381248 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:36:06.295722  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:36:06.296008  381248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:36:06.296034  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:06.298784  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.299105  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:06.299123  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.299281  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:06.299536  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:06.299712  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:06.299879  381248 sshutil.go:53] new ssh client: &{IP:192.168.61.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa Username:docker}
	I0108 22:36:06.400920  381248 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:36:06.406408  381248 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:36:06.406451  381248 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:36:06.406565  381248 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:36:06.406699  381248 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:36:06.406836  381248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:36:06.418598  381248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:36:06.445955  381248 start.go:303] post-start completed in 150.24751ms
	I0108 22:36:06.446035  381248 main.go:141] libmachine: (auto-587823) Calling .GetConfigRaw
	I0108 22:36:06.447072  381248 main.go:141] libmachine: (auto-587823) Calling .GetIP
	I0108 22:36:06.450081  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.450584  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:06.450628  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.450961  381248 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/auto-587823/config.json ...
	I0108 22:36:06.451187  381248 start.go:128] duration metric: createHost completed in 32.273652867s
	I0108 22:36:06.451220  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:06.454020  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.454461  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:06.454502  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.454673  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:06.454913  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:06.455087  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:06.455296  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:06.455507  381248 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:06.455966  381248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.208 22 <nil> <nil>}
	I0108 22:36:06.455984  381248 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:36:06.600698  381248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704753366.587225665
	
	I0108 22:36:06.600729  381248 fix.go:206] guest clock: 1704753366.587225665
	I0108 22:36:06.600740  381248 fix.go:219] Guest: 2024-01-08 22:36:06.587225665 +0000 UTC Remote: 2024-01-08 22:36:06.451201298 +0000 UTC m=+33.700441131 (delta=136.024367ms)
	I0108 22:36:06.600765  381248 fix.go:190] guest clock delta is within tolerance: 136.024367ms
	I0108 22:36:06.600770  381248 start.go:83] releasing machines lock for "auto-587823", held for 32.423434104s
	I0108 22:36:06.600791  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:36:06.601145  381248 main.go:141] libmachine: (auto-587823) Calling .GetIP
	I0108 22:36:06.604678  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.605223  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:06.605260  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.605478  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:36:06.606373  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:36:06.606676  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:36:06.606779  381248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:36:06.606858  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:06.607020  381248 ssh_runner.go:195] Run: cat /version.json
	I0108 22:36:06.607051  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:06.610799  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.610948  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.611209  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:06.611244  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.611549  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:06.611568  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:06.611611  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:06.611855  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:06.612054  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:06.612140  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:06.612237  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:06.612351  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:06.612413  381248 sshutil.go:53] new ssh client: &{IP:192.168.61.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa Username:docker}
	I0108 22:36:06.612514  381248 sshutil.go:53] new ssh client: &{IP:192.168.61.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa Username:docker}
	I0108 22:36:06.708959  381248 ssh_runner.go:195] Run: systemctl --version
	I0108 22:36:06.734127  381248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:36:06.904822  381248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:36:06.913182  381248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:36:06.913284  381248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:36:06.933716  381248 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:36:06.933748  381248 start.go:475] detecting cgroup driver to use...
	I0108 22:36:06.933863  381248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:36:06.955914  381248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:36:06.972573  381248 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:36:06.972669  381248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:36:06.988339  381248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:36:07.008503  381248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:36:07.130879  381248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:36:07.272788  381248 docker.go:219] disabling docker service ...
	I0108 22:36:07.272894  381248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:36:07.288217  381248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:36:07.303635  381248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:36:07.435097  381248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:36:07.556382  381248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:36:07.570975  381248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:36:07.592511  381248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:36:07.592602  381248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:36:07.603785  381248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:36:07.603878  381248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:36:07.614379  381248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:36:07.624989  381248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:36:07.635021  381248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:36:07.645559  381248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:36:07.656937  381248 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:36:07.657037  381248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:36:07.672814  381248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:36:07.684782  381248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:36:07.813675  381248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:36:08.153285  381248 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:36:08.153418  381248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:36:08.159863  381248 start.go:543] Will wait 60s for crictl version
	I0108 22:36:08.159972  381248 ssh_runner.go:195] Run: which crictl
	I0108 22:36:08.164368  381248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:36:08.214891  381248 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:36:08.214981  381248 ssh_runner.go:195] Run: crio --version
	I0108 22:36:08.277812  381248 ssh_runner.go:195] Run: crio --version
	I0108 22:36:08.367705  381248 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
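The runtime-setup sequence above (writing /etc/crictl.yaml, patching /etc/crio/crio.conf.d/02-crio.conf, falling back to modprobe br_netfilter, enabling IP forwarding, then restarting crio) is what the "Preparing Kubernetes v1.28.4 on CRI-O 1.24.1" step depends on. A minimal sketch of those same steps as standalone commands, using only the values that appear in the log (pause image, cgroupfs driver, "pod" conmon cgroup) and assuming the same drop-in path:

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and cgroup settings in the CRI-O drop-in
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # kernel prerequisites for the bridge CNI, then restart the runtime
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio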
	I0108 22:36:06.978174  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:07.477793  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:07.978176  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:08.477860  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:08.977832  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:09.477716  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:09.977512  380658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:10.200695  380658 kubeadm.go:1088] duration metric: took 13.753020214s to wait for elevateKubeSystemPrivileges.
	I0108 22:36:10.200744  380658 kubeadm.go:406] StartCluster complete in 28.146921486s
	I0108 22:36:10.200772  380658 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:36:10.200896  380658 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:36:10.203640  380658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:36:10.207442  380658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:36:10.207801  380658 config.go:182] Loaded profile config "newest-cni-154365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:36:10.207920  380658 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:36:10.208024  380658 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-154365"
	I0108 22:36:10.208082  380658 addons.go:237] Setting addon storage-provisioner=true in "newest-cni-154365"
	I0108 22:36:10.208172  380658 host.go:66] Checking if "newest-cni-154365" exists ...
	I0108 22:36:10.208229  380658 addons.go:69] Setting default-storageclass=true in profile "newest-cni-154365"
	I0108 22:36:10.208266  380658 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-154365"
	I0108 22:36:10.208703  380658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:10.208755  380658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:10.208774  380658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:10.208782  380658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:10.232098  380658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0108 22:36:10.232687  380658 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:10.233370  380658 main.go:141] libmachine: Using API Version  1
	I0108 22:36:10.233405  380658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:10.233830  380658 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:10.234565  380658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:10.234603  380658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:10.238764  380658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0108 22:36:10.239506  380658 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:10.240208  380658 main.go:141] libmachine: Using API Version  1
	I0108 22:36:10.240242  380658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:10.240822  380658 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:10.241660  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetState
	I0108 22:36:10.251276  380658 addons.go:237] Setting addon default-storageclass=true in "newest-cni-154365"
	I0108 22:36:10.251393  380658 host.go:66] Checking if "newest-cni-154365" exists ...
	I0108 22:36:10.251946  380658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:10.252026  380658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:10.263034  380658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I0108 22:36:10.264853  380658 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:10.265506  380658 main.go:141] libmachine: Using API Version  1
	I0108 22:36:10.265536  380658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:10.265980  380658 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:10.266178  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetState
	I0108 22:36:10.268446  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:36:10.273191  380658 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:36:10.275310  380658 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:36:10.275338  380658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:36:10.275561  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:36:10.281045  380658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40001
	I0108 22:36:10.281216  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:36:10.281729  380658 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:10.281897  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:36:10.281916  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:36:10.282221  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:36:10.282587  380658 main.go:141] libmachine: Using API Version  1
	I0108 22:36:10.282607  380658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:10.282680  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:36:10.282891  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:36:10.283011  380658 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:10.283072  380658 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa Username:docker}
	I0108 22:36:10.284603  380658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:10.284646  380658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:10.310228  380658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0108 22:36:10.311223  380658 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:10.312104  380658 main.go:141] libmachine: Using API Version  1
	I0108 22:36:10.312148  380658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:10.312958  380658 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:10.313223  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetState
	I0108 22:36:10.316334  380658 main.go:141] libmachine: (newest-cni-154365) Calling .DriverName
	I0108 22:36:10.319158  380658 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:36:10.319187  380658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:36:10.319216  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHHostname
	I0108 22:36:10.323716  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:36:10.324362  380658 main.go:141] libmachine: (newest-cni-154365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:78:62", ip: ""} in network mk-newest-cni-154365: {Iface:virbr2 ExpiryTime:2024-01-08 23:35:23 +0000 UTC Type:0 Mac:52:54:00:a3:78:62 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:newest-cni-154365 Clientid:01:52:54:00:a3:78:62}
	I0108 22:36:10.324412  380658 main.go:141] libmachine: (newest-cni-154365) DBG | domain newest-cni-154365 has defined IP address 192.168.39.87 and MAC address 52:54:00:a3:78:62 in network mk-newest-cni-154365
	I0108 22:36:10.324754  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHPort
	I0108 22:36:10.325122  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHKeyPath
	I0108 22:36:10.325587  380658 main.go:141] libmachine: (newest-cni-154365) Calling .GetSSHUsername
	I0108 22:36:10.325817  380658 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/newest-cni-154365/id_rsa Username:docker}
	I0108 22:36:10.497974  380658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:36:10.504623  380658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:36:10.537412  380658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:36:10.829579  380658 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-154365" context rescaled to 1 replicas
	I0108 22:36:10.829647  380658 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:36:10.831794  380658 out.go:177] * Verifying Kubernetes components...
	I0108 22:36:10.834238  380658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:36:11.920826  380658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.422790505s)
	I0108 22:36:11.920892  380658 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:11.920871  380658 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.416195244s)
	I0108 22:36:11.920934  380658 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 22:36:11.920915  380658 main.go:141] libmachine: (newest-cni-154365) Calling .Close
	I0108 22:36:11.920967  380658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.38351985s)
	I0108 22:36:11.921016  380658 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:11.921026  380658 main.go:141] libmachine: (newest-cni-154365) Calling .Close
	I0108 22:36:11.921050  380658 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.086764228s)
	I0108 22:36:11.921299  380658 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:11.921320  380658 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:11.921332  380658 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:11.921342  380658 main.go:141] libmachine: (newest-cni-154365) Calling .Close
	I0108 22:36:11.923217  380658 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:36:11.923263  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Closing plugin on server side
	I0108 22:36:11.923415  380658 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:11.923436  380658 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:11.923458  380658 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:11.923481  380658 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:11.923490  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Closing plugin on server side
	I0108 22:36:11.923499  380658 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:11.923511  380658 main.go:141] libmachine: (newest-cni-154365) Calling .Close
	I0108 22:36:11.923549  380658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:36:11.923863  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Closing plugin on server side
	I0108 22:36:11.923947  380658 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:11.923966  380658 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:11.971908  380658 api_server.go:72] duration metric: took 1.14220685s to wait for apiserver process to appear ...
	I0108 22:36:11.971957  380658 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:36:11.971994  380658 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0108 22:36:11.975418  380658 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:11.975453  380658 main.go:141] libmachine: (newest-cni-154365) Calling .Close
	I0108 22:36:11.975870  380658 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:11.975895  380658 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:11.975930  380658 main.go:141] libmachine: (newest-cni-154365) DBG | Closing plugin on server side
	I0108 22:36:11.979146  380658 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 22:36:11.981019  380658 addons.go:508] enable addons completed in 1.773089869s: enabled=[storage-provisioner default-storageclass]
	I0108 22:36:12.011028  380658 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I0108 22:36:12.013909  380658 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 22:36:12.013964  380658 api_server.go:131] duration metric: took 41.997057ms to wait for apiserver health ...
	I0108 22:36:12.013988  380658 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:36:12.034067  380658 system_pods.go:59] 8 kube-system pods found
	I0108 22:36:12.034116  380658 system_pods.go:61] "coredns-76f75df574-9gtmb" [c08f10f5-2e14-4f70-b22c-097e7a62d33e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:36:12.034128  380658 system_pods.go:61] "coredns-76f75df574-hxlf2" [3719b466-4021-4b80-bb35-5ad1178f2902] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 22:36:12.034144  380658 system_pods.go:61] "etcd-newest-cni-154365" [68a9dc57-d8ca-4f95-aded-3514170a651b] Running
	I0108 22:36:12.034156  380658 system_pods.go:61] "kube-apiserver-newest-cni-154365" [544f2bcb-1db8-46b7-94f6-ffc2c57453c4] Running
	I0108 22:36:12.034163  380658 system_pods.go:61] "kube-controller-manager-newest-cni-154365" [39164c7d-2066-4049-8874-99864bbb49b4] Running
	I0108 22:36:12.034171  380658 system_pods.go:61] "kube-proxy-g46jg" [b5e90927-6915-421c-ac3b-36f418a16083] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 22:36:12.034185  380658 system_pods.go:61] "kube-scheduler-newest-cni-154365" [77f7e2ee-88c0-4ae8-b235-d32563d3a4a0] Running
	I0108 22:36:12.034198  380658 system_pods.go:61] "storage-provisioner" [9a5f27ce-ab1d-4e5b-ac40-2194ffc3bf45] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:36:12.034212  380658 system_pods.go:74] duration metric: took 20.215421ms to wait for pod list to return data ...
	I0108 22:36:12.034227  380658 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:36:12.052241  380658 default_sa.go:45] found service account: "default"
	I0108 22:36:12.052303  380658 default_sa.go:55] duration metric: took 18.064123ms for default service account to be created ...
	I0108 22:36:12.052325  380658 kubeadm.go:581] duration metric: took 1.222638108s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0108 22:36:12.052355  380658 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:36:12.060330  380658 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:36:12.060371  380658 node_conditions.go:123] node cpu capacity is 2
	I0108 22:36:12.060392  380658 node_conditions.go:105] duration metric: took 8.016193ms to run NodePressure ...
	I0108 22:36:12.060408  380658 start.go:228] waiting for startup goroutines ...
	I0108 22:36:12.060417  380658 start.go:233] waiting for cluster config update ...
	I0108 22:36:12.060437  380658 start.go:242] writing updated cluster config ...
	I0108 22:36:12.060784  380658 ssh_runner.go:195] Run: rm -f paused
	I0108 22:36:12.130649  380658 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0108 22:36:12.133054  380658 out.go:177] * Done! kubectl is now configured to use "newest-cni-154365" cluster and "default" namespace by default
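The newest-cni-154365 start above ends with the same checks a reader can repeat against the cluster: an apiserver /healthz probe, a kube-system pod listing, and confirmation that the "default" service account the earlier retries were polling for exists. A minimal sketch of the equivalent manual checks, assuming the kubeconfig context created by this run is still present:

    kubectl --context newest-cni-154365 get --raw=/healthz          # expect "ok", matching the logged 200 response
    kubectl --context newest-cni-154365 -n kube-system get pods     # coredns/kube-proxy/storage-provisioner may still be Pending
    kubectl --context newest-cni-154365 -n default get sa default   # the service account the repeated "get sa default" runs waited on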
	I0108 22:36:08.369429  381248 main.go:141] libmachine: (auto-587823) Calling .GetIP
	I0108 22:36:08.372749  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:08.373228  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:08.373264  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:08.373571  381248 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0108 22:36:08.379690  381248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:36:08.397107  381248 localpath.go:92] copying /home/jenkins/minikube-integration/17866-334768/.minikube/client.crt -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/auto-587823/client.crt
	I0108 22:36:08.397289  381248 localpath.go:117] copying /home/jenkins/minikube-integration/17866-334768/.minikube/client.key -> /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/auto-587823/client.key
	I0108 22:36:08.397442  381248 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:36:08.397516  381248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:36:08.442049  381248 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:36:08.442133  381248 ssh_runner.go:195] Run: which lz4
	I0108 22:36:08.447573  381248 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:36:08.453855  381248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:36:08.453910  381248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 22:36:10.391966  381248 crio.go:444] Took 1.944448 seconds to copy over tarball
	I0108 22:36:10.392116  381248 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:15:41 UTC, ends at Mon 2024-01-08 22:36:15 UTC. --
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.672092363Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4c979b20-2e9f-4d5c-b149-d133d4dfaa4c name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.674058518Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e31daabc-5114-4236-a69e-278c76345012 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.674606377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753375674584178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e31daabc-5114-4236-a69e-278c76345012 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.675462142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8153b37b-14ba-4396-8832-6b789f5881d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.675519902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8153b37b-14ba-4396-8832-6b789f5881d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.675698584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131,PodSandboxId:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752479939620404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{io.kubernetes.container.hash: 312af6c7,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c,PodSandboxId:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752479229322417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,},Annotations:map[string]string{io.kubernetes.container.hash: d0629fb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11,PodSandboxId:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752478190581728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,},Annotations:map[string]string{io.kubernetes.container.hash: 2df9dc56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a,PodSandboxId:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752453709253427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 4b5b41db3bfd708974d709b20906a429,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7,PodSandboxId:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752453775227644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,},Annotations:
map[string]string{io.kubernetes.container.hash: 79124fc6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8,PodSandboxId:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752453218449993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f190eb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13,PodSandboxId:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752453031434921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5704fc2de7d01cdebc5c77e98b2033
d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8153b37b-14ba-4396-8832-6b789f5881d1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.739993711Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=945da298-b971-4b42-b027-507e90411ff6 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.740065959Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=945da298-b971-4b42-b027-507e90411ff6 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.742673390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ff2ad9fb-083a-4bf4-9271-93de4b634e9f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.743176124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753375743157488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ff2ad9fb-083a-4bf4-9271-93de4b634e9f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.745263610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b66f7e7-f3c6-4747-aa74-50aac4e28263 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.745344123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b66f7e7-f3c6-4747-aa74-50aac4e28263 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.745578968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131,PodSandboxId:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752479939620404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{io.kubernetes.container.hash: 312af6c7,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c,PodSandboxId:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752479229322417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,},Annotations:map[string]string{io.kubernetes.container.hash: d0629fb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11,PodSandboxId:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752478190581728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,},Annotations:map[string]string{io.kubernetes.container.hash: 2df9dc56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a,PodSandboxId:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752453709253427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 4b5b41db3bfd708974d709b20906a429,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7,PodSandboxId:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752453775227644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,},Annotations:
map[string]string{io.kubernetes.container.hash: 79124fc6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8,PodSandboxId:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752453218449993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f190eb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13,PodSandboxId:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752453031434921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5704fc2de7d01cdebc5c77e98b2033
d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b66f7e7-f3c6-4747-aa74-50aac4e28263 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.796859925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=29c23785-d021-447f-a140-5f37f14bb71d name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.797120236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=29c23785-d021-447f-a140-5f37f14bb71d name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.797462565Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=2c72b553-cc53-48fd-800e-191c83bc2ad2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.799831233Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=855086a8-831d-4947-8875-7b56800bb3a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.800655621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753375800613822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=855086a8-831d-4947-8875-7b56800bb3a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.801944396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=38e960a3-1761-4d9e-9760-ab50a0b1396c name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.802016252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=38e960a3-1761-4d9e-9760-ab50a0b1396c name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.802237342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131,PodSandboxId:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752479939620404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{io.kubernetes.container.hash: 312af6c7,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c,PodSandboxId:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752479229322417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,},Annotations:map[string]string{io.kubernetes.container.hash: d0629fb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11,PodSandboxId:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752478190581728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,},Annotations:map[string]string{io.kubernetes.container.hash: 2df9dc56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a,PodSandboxId:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752453709253427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 4b5b41db3bfd708974d709b20906a429,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7,PodSandboxId:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752453775227644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,},Annotations:
map[string]string{io.kubernetes.container.hash: 79124fc6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8,PodSandboxId:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752453218449993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f190eb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13,PodSandboxId:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752453031434921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5704fc2de7d01cdebc5c77e98b2033
d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=38e960a3-1761-4d9e-9760-ab50a0b1396c name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.801898497Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:949c6275-6836-4035-89f5-f2d2c2caaa89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752478996928842,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-08T22:21:18.657335738Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f509e054cc152ce088399f339d3e0dc8f083e4b817813cbfad7f09a97a98590,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-qhjlv,Uid:f1bff39b-c944-4de0-a5b8-eb239e91c6db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752478640422659,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-qhjlv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1bff39b-c944-4de0-a5b8-eb239e91c6d
b,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T22:21:18.293489919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jbz6n,Uid:562faf84-b986-4f0e-97cd-41aa5ac7ea17,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752476474942326,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T22:21:15.783303052Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&PodSandboxMetadata{Name:kube-proxy-hqj9b,Uid:14b3f3bd-1d65-4382-adc2-09
344b54463d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752476234169088,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T22:21:15.597097963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-903819,Uid:a5704fc2de7d01cdebc5c77e98b2033d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752452488211208,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: a5704fc2de7d01cdebc5c77e98b2033d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a5704fc2de7d01cdebc5c77e98b2033d,kubernetes.io/config.seen: 2024-01-08T22:20:51.938915783Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-903819,Uid:85c89db12549c8e4094a598a3e86a27a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752452476991195,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.132:8443,kubernetes.io/config.hash: 85c89db12549c8e4094a598a3e86a27a,kubernetes.io/config.seen: 2024-01-08T22:20:51.938913751Z,k
ubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-903819,Uid:4b5b41db3bfd708974d709b20906a429,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704752452440840297,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b5b41db3bfd708974d709b20906a429,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4b5b41db3bfd708974d709b20906a429,kubernetes.io/config.seen: 2024-01-08T22:20:51.938917022Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-903819,Uid:085fa14de085a567626002de5792a237,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt
:1704752452435112550,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.132:2379,kubernetes.io/config.hash: 085fa14de085a567626002de5792a237,kubernetes.io/config.seen: 2024-01-08T22:20:51.938906335Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=2c72b553-cc53-48fd-800e-191c83bc2ad2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.805421813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=381340da-7a29-4396-9d47-16475858b48d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.805503430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=381340da-7a29-4396-9d47-16475858b48d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:15 embed-certs-903819 crio[740]: time="2024-01-08 22:36:15.805667317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131,PodSandboxId:58a23ae790192f099392717f8dda7610015879e3a74d29883250a964debb2102,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752479939620404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949c6275-6836-4035-89f5-f2d2c2caaa89,},Annotations:map[string]string{io.kubernetes.container.hash: 312af6c7,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c,PodSandboxId:945ca354713a9124230a2d76cc99b3ebc8d33594bc0dabf502c172d6e17157c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752479229322417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hqj9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14b3f3bd-1d65-4382-adc2-09344b54463d,},Annotations:map[string]string{io.kubernetes.container.hash: d0629fb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11,PodSandboxId:0daf08aef77cdece697970a1af0c93a070509a149c82059adc0be3c6131126d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752478190581728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jbz6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562faf84-b986-4f0e-97cd-41aa5ac7ea17,},Annotations:map[string]string{io.kubernetes.container.hash: 2df9dc56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a,PodSandboxId:8858ded44e8905682bc8d22badb10f40346bdb37613afddbe48ebdbd1f263046,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752453709253427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 4b5b41db3bfd708974d709b20906a429,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7,PodSandboxId:39fac5fc8ed4ceb0d7fbb46a3a29d881a6fd7659ae64e6922b48573adaa0545e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752453775227644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085fa14de085a567626002de5792a237,},Annotations:
map[string]string{io.kubernetes.container.hash: 79124fc6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8,PodSandboxId:3a4abfae5e370606ba281039af762a0f9f446bbf20723ebd048d6618432b82e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752453218449993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c89db12549c8e4094a598a3e86a27a,},Annotations:map[string
]string{io.kubernetes.container.hash: 1f190eb6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13,PodSandboxId:a9a2e4161dc761c653a8440fc3f89379312b674a75bd056ab8a637cc017aa2d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752453031434921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-903819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5704fc2de7d01cdebc5c77e98b2033
d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=381340da-7a29-4396-9d47-16475858b48d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10be43da68cf5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   58a23ae790192       storage-provisioner
	3d668e971bd86       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   945ca354713a9       kube-proxy-hqj9b
	9ae7848fe3ee0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   0daf08aef77cd       coredns-5dd5756b68-jbz6n
	c5c66b00d0275       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   39fac5fc8ed4c       etcd-embed-certs-903819
	5430b769556bb       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   8858ded44e890       kube-scheduler-embed-certs-903819
	8e83b759c6cec       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   3a4abfae5e370       kube-apiserver-embed-certs-903819
	ceba1f5202ccd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   a9a2e4161dc76       kube-controller-manager-embed-certs-903819
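
The container status table above is the node-side runtime view; a listing in this format can normally be reproduced on the node itself (shown as a sketch, not part of the captured log):

	sudo crictl ps -a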
	
	
	==> coredns [9ae7848fe3ee0f26dc59a5a600550b6e6c00f72d50f42927a22557e677698f11] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               embed-certs-903819
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-903819
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=embed-certs-903819
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_21_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:20:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-903819
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:36:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:31:37 +0000   Mon, 08 Jan 2024 22:20:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:31:37 +0000   Mon, 08 Jan 2024 22:20:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:31:37 +0000   Mon, 08 Jan 2024 22:20:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:31:37 +0000   Mon, 08 Jan 2024 22:21:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.132
	  Hostname:    embed-certs-903819
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f183ef036284b6e80008b87d0d3f30b
	  System UUID:                5f183ef0-3628-4b6e-8000-8b87d0d3f30b
	  Boot ID:                    bd1baecc-be37-4aa8-bd81-dd09855d135b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jbz6n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-903819                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-903819             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-903819    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-hqj9b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-903819             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-qhjlv               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-903819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-903819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-903819 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-903819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-903819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-903819 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node embed-certs-903819 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node embed-certs-903819 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-903819 event: Registered Node embed-certs-903819 in Controller
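
The node description above matches what kubectl reports for this profile; as a rough reproduction step against the same cluster (not part of the captured log):

	kubectl --context embed-certs-903819 describe node embed-certs-903819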
	
	
	==> dmesg <==
	[Jan 8 22:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074413] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.616771] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.855772] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.164640] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.588704] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.442888] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.126566] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.166256] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[  +0.114530] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +0.252458] systemd-fstab-generator[724]: Ignoring "noauto" for root device
	[Jan 8 22:16] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[ +22.475701] kauditd_printk_skb: 34 callbacks suppressed
	[Jan 8 22:20] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.669685] systemd-fstab-generator[3733]: Ignoring "noauto" for root device
	[Jan 8 22:21] systemd-fstab-generator[4059]: Ignoring "noauto" for root device
	
	
	==> etcd [c5c66b00d0275fbd4fb2f20449c03ec697baac522bd42302a6866d7311c4a4e7] <==
	{"level":"info","ts":"2024-01-08T22:20:56.870316Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a7da7c7e26779cb7","local-member-attributes":"{Name:embed-certs-903819 ClientURLs:[https://192.168.72.132:2379]}","request-path":"/0/members/a7da7c7e26779cb7/attributes","cluster-id":"146bd9643c3d2907","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T22:20:56.870333Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:20:56.870489Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:56.871862Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"146bd9643c3d2907","local-member-id":"a7da7c7e26779cb7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:56.871986Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:56.872015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:20:56.872031Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:20:56.871991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T22:20:56.872686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T22:20:56.872799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T22:20:56.873098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.132:2379"}
	{"level":"info","ts":"2024-01-08T22:21:16.245627Z","caller":"traceutil/trace.go:171","msg":"trace[870548640] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"146.19177ms","start":"2024-01-08T22:21:16.099339Z","end":"2024-01-08T22:21:16.245531Z","steps":["trace[870548640] 'process raft request'  (duration: 85.502943ms)","trace[870548640] 'compare'  (duration: 46.062875ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:21:16.246111Z","caller":"traceutil/trace.go:171","msg":"trace[821539251] linearizableReadLoop","detail":"{readStateIndex:393; appliedIndex:391; }","duration":"139.245101ms","start":"2024-01-08T22:21:16.106833Z","end":"2024-01-08T22:21:16.246078Z","steps":["trace[821539251] 'read index received'  (duration: 3.70029ms)","trace[821539251] 'applied index is now lower than readState.Index'  (duration: 135.543206ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T22:21:16.247508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.662598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2024-01-08T22:21:16.247937Z","caller":"traceutil/trace.go:171","msg":"trace[115285580] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:385; }","duration":"141.148397ms","start":"2024-01-08T22:21:16.106767Z","end":"2024-01-08T22:21:16.247915Z","steps":["trace[115285580] 'agreement among raft nodes before linearized reading'  (duration: 140.588542ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:30:56.918645Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":712}
	{"level":"info","ts":"2024-01-08T22:30:56.922058Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":712,"took":"2.59318ms","hash":3077321258}
	{"level":"info","ts":"2024-01-08T22:30:56.922148Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3077321258,"revision":712,"compact-revision":-1}
	{"level":"info","ts":"2024-01-08T22:35:40.809778Z","caller":"traceutil/trace.go:171","msg":"trace[1020782548] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"105.825797ms","start":"2024-01-08T22:35:40.703808Z","end":"2024-01-08T22:35:40.809633Z","steps":["trace[1020782548] 'process raft request'  (duration: 105.583349ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:35:41.001975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.935594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.72.132\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-01-08T22:35:41.002229Z","caller":"traceutil/trace.go:171","msg":"trace[2006131427] range","detail":"{range_begin:/registry/masterleases/192.168.72.132; range_end:; response_count:1; response_revision:1185; }","duration":"152.289374ms","start":"2024-01-08T22:35:40.849905Z","end":"2024-01-08T22:35:41.002195Z","steps":["trace[2006131427] 'range keys from in-memory index tree'  (duration: 151.424151ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:35:41.20238Z","caller":"traceutil/trace.go:171","msg":"trace[1620587405] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"134.855234ms","start":"2024-01-08T22:35:41.067503Z","end":"2024-01-08T22:35:41.202358Z","steps":["trace[1620587405] 'process raft request'  (duration: 68.745186ms)","trace[1620587405] 'compare'  (duration: 65.97256ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:35:56.929173Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2024-01-08T22:35:56.931334Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":955,"took":"1.707855ms","hash":1541891700}
	{"level":"info","ts":"2024-01-08T22:35:56.931441Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1541891700,"revision":955,"compact-revision":712}
	
	
	==> kernel <==
	 22:36:16 up 20 min,  0 users,  load average: 0.04, 0.12, 0.16
	Linux embed-certs-903819 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8e83b759c6cecf58196b60e0bc96cdcee204b716c3b913e66e1d27310707e7e8] <==
	W0108 22:31:59.732334       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:31:59.732564       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:31:59.732599       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:32:58.568386       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:33:58.568406       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:33:59.731960       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:33:59.732164       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:33:59.732193       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:33:59.733626       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:33:59.733678       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:33:59.733685       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:34:58.569309       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:35:58.568529       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:35:58.736110       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:35:58.736253       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:35:58.736873       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:35:59.736447       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:35:59.736556       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:35:59.736685       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:35:59.736447       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:35:59.736878       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:35:59.738864       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ceba1f5202ccd8ee4e7b436fca5091477a1e4abbc9131041fbad098218d0ef13] <==
	I0108 22:30:45.379303       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:31:14.869685       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:31:15.393146       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:31:44.878285       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:31:45.405271       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:32:14.888422       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:15.418190       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 22:32:19.039162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="260.028µs"
	I0108 22:32:30.042907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="186.078µs"
	E0108 22:32:44.898075       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:45.430805       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:33:14.905963       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:15.442081       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:33:44.912354       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:45.452568       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:34:14.927902       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:34:15.463577       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:34:44.935362       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:34:45.475550       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:35:14.943283       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:35:15.488251       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:35:44.951314       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:35:45.502600       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:36:14.960113       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:36:15.517231       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3d668e971bd869b582cc5b5ec0b2c93feea2a90671931d560a3417b4293b352c] <==
	I0108 22:21:19.878063       1 server_others.go:69] "Using iptables proxy"
	I0108 22:21:19.918874       1 node.go:141] Successfully retrieved node IP: 192.168.72.132
	I0108 22:21:20.033252       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 22:21:20.033347       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:21:20.039156       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:21:20.040672       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:21:20.041863       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:21:20.042065       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:21:20.047171       1 config.go:188] "Starting service config controller"
	I0108 22:21:20.053451       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:21:20.055633       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:21:20.055844       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:21:20.060091       1 config.go:315] "Starting node config controller"
	I0108 22:21:20.060207       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:21:20.157248       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:21:20.160841       1 shared_informer.go:318] Caches are synced for node config
	I0108 22:21:20.161257       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5430b769556bb48d5052c2520d29f250eb11b870391353a5029992b6d916c93a] <==
	W0108 22:20:59.666521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:20:59.666585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 22:20:59.666639       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:20:59.666647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:20:59.713099       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:20:59.713161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:20:59.813168       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:20:59.813266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 22:20:59.870865       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:20:59.871015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 22:21:00.027023       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:00.027120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:00.032902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:00.033015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:00.095000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:00.095160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:00.135271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:00.135362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:00.193816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:21:00.193977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:21:00.240204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:21:00.240280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 22:21:00.316643       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:21:00.316786       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 22:21:02.150181       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:15:41 UTC, ends at Mon 2024-01-08 22:36:16 UTC. --
	Jan 08 22:33:40 embed-certs-903819 kubelet[4066]: E0108 22:33:40.015356    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:33:53 embed-certs-903819 kubelet[4066]: E0108 22:33:53.014569    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:34:03 embed-certs-903819 kubelet[4066]: E0108 22:34:03.151056    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:34:03 embed-certs-903819 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:34:03 embed-certs-903819 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:34:03 embed-certs-903819 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:34:08 embed-certs-903819 kubelet[4066]: E0108 22:34:08.015013    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:34:20 embed-certs-903819 kubelet[4066]: E0108 22:34:20.016182    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:34:31 embed-certs-903819 kubelet[4066]: E0108 22:34:31.015028    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:34:42 embed-certs-903819 kubelet[4066]: E0108 22:34:42.015517    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:34:54 embed-certs-903819 kubelet[4066]: E0108 22:34:54.015592    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:35:03 embed-certs-903819 kubelet[4066]: E0108 22:35:03.161827    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:35:03 embed-certs-903819 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:35:03 embed-certs-903819 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:35:03 embed-certs-903819 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:35:08 embed-certs-903819 kubelet[4066]: E0108 22:35:08.014605    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:35:21 embed-certs-903819 kubelet[4066]: E0108 22:35:21.016612    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:35:36 embed-certs-903819 kubelet[4066]: E0108 22:35:36.016492    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:35:51 embed-certs-903819 kubelet[4066]: E0108 22:35:51.014858    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	Jan 08 22:36:03 embed-certs-903819 kubelet[4066]: E0108 22:36:03.152698    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:36:03 embed-certs-903819 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:36:03 embed-certs-903819 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:36:03 embed-certs-903819 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:36:03 embed-certs-903819 kubelet[4066]: E0108 22:36:03.176217    4066 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 08 22:36:05 embed-certs-903819 kubelet[4066]: E0108 22:36:05.015218    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qhjlv" podUID="f1bff39b-c944-4de0-a5b8-eb239e91c6db"
	
	
	==> storage-provisioner [10be43da68cf518bd18831745fa6dd7ed705296cfd6e4874fac6ef8f4067c131] <==
	I0108 22:21:20.096970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:21:20.121619       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:21:20.122012       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:21:20.143261       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:21:20.145412       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"769a2d5e-78de-4bba-b7b8-4f926749b3f6", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-903819_f92207b3-5c18-4530-b46a-4c83fce84323 became leader
	I0108 22:21:20.145603       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-903819_f92207b3-5c18-4530-b46a-4c83fce84323!
	I0108 22:21:20.252696       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-903819_f92207b3-5c18-4530-b46a-4c83fce84323!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-903819 -n embed-certs-903819
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-903819 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qhjlv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-903819 describe pod metrics-server-57f55c9bc5-qhjlv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-903819 describe pod metrics-server-57f55c9bc5-qhjlv: exit status 1 (75.534295ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qhjlv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-903819 describe pod metrics-server-57f55c9bc5-qhjlv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (100.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (80.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-08 22:36:48.308934634 +0000 UTC m=+5684.010056974
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-292054 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-292054 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.019µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-292054 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-292054 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-292054 logs -n 25: (2.167521756s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:06 UTC | 08 Jan 24 22:09 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079759        | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC | 08 Jan 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-675668             | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-903819            | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC | 08 Jan 24 22:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-292054  | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC | 08 Jan 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:09 UTC |                     |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079759             | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC | 08 Jan 24 22:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-675668                  | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-903819                 | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-292054       | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-292054 | jenkins | v1.32.0 | 08 Jan 24 22:11 UTC | 08 Jan 24 22:26 UTC |
	|         | default-k8s-diff-port-292054                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-079759                              | old-k8s-version-079759       | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:35 UTC |
	| start   | -p newest-cni-154365 --memory=2200 --alsologtostderr   | newest-cni-154365            | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:36 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-675668                                   | no-preload-675668            | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:35 UTC |
	| start   | -p auto-587823 --memory=3072                           | auto-587823                  | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-154365             | newest-cni-154365            | jenkins | v1.32.0 | 08 Jan 24 22:36 UTC | 08 Jan 24 22:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-154365                                   | newest-cni-154365            | jenkins | v1.32.0 | 08 Jan 24 22:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-903819                                  | embed-certs-903819           | jenkins | v1.32.0 | 08 Jan 24 22:36 UTC | 08 Jan 24 22:36 UTC |
	| start   | -p kindnet-587823                                      | kindnet-587823               | jenkins | v1.32.0 | 08 Jan 24 22:36 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:36:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:36:18.710145  381834 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:36:18.710293  381834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:36:18.710302  381834 out.go:309] Setting ErrFile to fd 2...
	I0108 22:36:18.710307  381834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:36:18.710521  381834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:36:18.711136  381834 out.go:303] Setting JSON to false
	I0108 22:36:18.712367  381834 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11905,"bootTime":1704741474,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:36:18.712469  381834 start.go:138] virtualization: kvm guest
	I0108 22:36:18.716231  381834 out.go:177] * [kindnet-587823] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:36:18.718108  381834 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:36:18.718176  381834 notify.go:220] Checking for updates...
	I0108 22:36:18.719708  381834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:36:18.721543  381834 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:36:18.723108  381834 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:36:18.724532  381834 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:36:18.727612  381834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:36:17.828574  381248 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:36:18.074435  381248 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:36:18.124325  381248 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:36:18.124398  381248 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:36:18.324166  381248 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:36:18.394799  381248 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:36:18.602934  381248 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:36:18.754372  381248 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:36:18.757870  381248 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:36:18.760038  381248 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:36:18.729994  381834 config.go:182] Loaded profile config "auto-587823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:36:18.730151  381834 config.go:182] Loaded profile config "default-k8s-diff-port-292054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:36:18.730294  381834 config.go:182] Loaded profile config "newest-cni-154365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:36:18.730414  381834 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:36:18.772363  381834 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 22:36:18.773605  381834 start.go:298] selected driver: kvm2
	I0108 22:36:18.773619  381834 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:36:18.773659  381834 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:36:18.774711  381834 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:36:18.774840  381834 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:36:18.796791  381834 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:36:18.796885  381834 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 22:36:18.797190  381834 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:36:18.797240  381834 cni.go:84] Creating CNI manager for "kindnet"
	I0108 22:36:18.797251  381834 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 22:36:18.797261  381834 start_flags.go:321] config:
	{Name:kindnet-587823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-587823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:36:18.797447  381834 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:36:18.800630  381834 out.go:177] * Starting control plane node kindnet-587823 in cluster kindnet-587823
	I0108 22:36:18.802104  381834 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:36:18.802182  381834 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:36:18.802202  381834 cache.go:56] Caching tarball of preloaded images
	I0108 22:36:18.802350  381834 preload.go:174] Found /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:36:18.802369  381834 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:36:18.802516  381834 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/kindnet-587823/config.json ...
	I0108 22:36:18.802543  381834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/kindnet-587823/config.json: {Name:mk45e7a0c853c079b9fa9f1327de81c8227c1cda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:36:18.802744  381834 start.go:365] acquiring machines lock for kindnet-587823: {Name:mke11f05e00082dc47df43d3fbf80ed0f3c55335 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:36:18.802798  381834 start.go:369] acquired machines lock for "kindnet-587823" in 31.819µs
	I0108 22:36:18.802826  381834 start.go:93] Provisioning new machine with config: &{Name:kindnet-587823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-587823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:36:18.802957  381834 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 22:36:18.761792  381248 out.go:204]   - Booting up control plane ...
	I0108 22:36:18.761933  381248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:36:18.762500  381248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:36:18.763489  381248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:36:18.783229  381248 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:36:18.785344  381248 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:36:18.785425  381248 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:36:18.952947  381248 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:36:18.804764  381834 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0108 22:36:18.804935  381834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:18.804977  381834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:18.822859  381834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43295
	I0108 22:36:18.823509  381834 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:18.824222  381834 main.go:141] libmachine: Using API Version  1
	I0108 22:36:18.824251  381834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:18.824648  381834 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:18.824879  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetMachineName
	I0108 22:36:18.825046  381834 main.go:141] libmachine: (kindnet-587823) Calling .DriverName
	I0108 22:36:18.825182  381834 start.go:159] libmachine.API.Create for "kindnet-587823" (driver="kvm2")
	I0108 22:36:18.825209  381834 client.go:168] LocalClient.Create starting
	I0108 22:36:18.825239  381834 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem
	I0108 22:36:18.825281  381834 main.go:141] libmachine: Decoding PEM data...
	I0108 22:36:18.825305  381834 main.go:141] libmachine: Parsing certificate...
	I0108 22:36:18.825388  381834 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem
	I0108 22:36:18.825418  381834 main.go:141] libmachine: Decoding PEM data...
	I0108 22:36:18.825436  381834 main.go:141] libmachine: Parsing certificate...
	I0108 22:36:18.825460  381834 main.go:141] libmachine: Running pre-create checks...
	I0108 22:36:18.825471  381834 main.go:141] libmachine: (kindnet-587823) Calling .PreCreateCheck
	I0108 22:36:18.825806  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetConfigRaw
	I0108 22:36:18.826260  381834 main.go:141] libmachine: Creating machine...
	I0108 22:36:18.826278  381834 main.go:141] libmachine: (kindnet-587823) Calling .Create
	I0108 22:36:18.826406  381834 main.go:141] libmachine: (kindnet-587823) Creating KVM machine...
	I0108 22:36:18.827936  381834 main.go:141] libmachine: (kindnet-587823) DBG | found existing default KVM network
	I0108 22:36:18.829948  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:18.829682  381856 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e5:04:a2} reservation:<nil>}
	I0108 22:36:18.831129  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:18.831033  381856 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c1:31:02} reservation:<nil>}
	I0108 22:36:18.832558  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:18.832446  381856 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:34:5b:ec} reservation:<nil>}
	I0108 22:36:18.833827  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:18.833757  381856 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002bd920}
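The network.go lines above show a first-fit scan: each candidate private /24 that already backs a libvirt network is skipped until a free one (192.168.72.0/24 here) is found. A minimal sketch of that selection, assuming the set of taken subnets is already known; this is illustrative and not minikube's actual network.go:

```go
package main

import "fmt"

// firstFreeSubnet scans candidate private /24 networks in order and returns
// the first one not already claimed by an existing libvirt network.
// Both the candidate list and the "taken" set are assumptions for illustration.
func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if taken[cidr] {
			continue // skip subnets already backing another network
		}
		return cidr, true
	}
	return "", false
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true, "192.168.61.0/24": true}
	if cidr, ok := firstFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.72.0/24
	}
}
```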
	I0108 22:36:18.840901  381834 main.go:141] libmachine: (kindnet-587823) DBG | trying to create private KVM network mk-kindnet-587823 192.168.72.0/24...
	I0108 22:36:18.937628  381834 main.go:141] libmachine: (kindnet-587823) DBG | private KVM network mk-kindnet-587823 192.168.72.0/24 created
	I0108 22:36:18.937665  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:18.937550  381856 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:36:18.937675  381834 main.go:141] libmachine: (kindnet-587823) Setting up store path in /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823 ...
	I0108 22:36:18.937694  381834 main.go:141] libmachine: (kindnet-587823) Building disk image from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 22:36:18.937711  381834 main.go:141] libmachine: (kindnet-587823) Downloading /home/jenkins/minikube-integration/17866-334768/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 22:36:19.190660  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:19.190511  381856 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/id_rsa...
	I0108 22:36:19.290457  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:19.290300  381856 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/kindnet-587823.rawdisk...
	I0108 22:36:19.290488  381834 main.go:141] libmachine: (kindnet-587823) DBG | Writing magic tar header
	I0108 22:36:19.290507  381834 main.go:141] libmachine: (kindnet-587823) DBG | Writing SSH key tar header
	I0108 22:36:19.290522  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:19.290420  381856 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823 ...
	I0108 22:36:19.290538  381834 main.go:141] libmachine: (kindnet-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823
	I0108 22:36:19.290650  381834 main.go:141] libmachine: (kindnet-587823) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823 (perms=drwx------)
	I0108 22:36:19.290686  381834 main.go:141] libmachine: (kindnet-587823) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube/machines (perms=drwxr-xr-x)
	I0108 22:36:19.290699  381834 main.go:141] libmachine: (kindnet-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube/machines
	I0108 22:36:19.290717  381834 main.go:141] libmachine: (kindnet-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:36:19.290730  381834 main.go:141] libmachine: (kindnet-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-334768
	I0108 22:36:19.290748  381834 main.go:141] libmachine: (kindnet-587823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 22:36:19.290760  381834 main.go:141] libmachine: (kindnet-587823) DBG | Checking permissions on dir: /home/jenkins
	I0108 22:36:19.290774  381834 main.go:141] libmachine: (kindnet-587823) DBG | Checking permissions on dir: /home
	I0108 22:36:19.290786  381834 main.go:141] libmachine: (kindnet-587823) DBG | Skipping /home - not owner
	I0108 22:36:19.290799  381834 main.go:141] libmachine: (kindnet-587823) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768/.minikube (perms=drwxr-xr-x)
	I0108 22:36:19.290813  381834 main.go:141] libmachine: (kindnet-587823) Setting executable bit set on /home/jenkins/minikube-integration/17866-334768 (perms=drwxrwxr-x)
	I0108 22:36:19.290837  381834 main.go:141] libmachine: (kindnet-587823) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 22:36:19.290850  381834 main.go:141] libmachine: (kindnet-587823) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 22:36:19.290860  381834 main.go:141] libmachine: (kindnet-587823) Creating domain...
	I0108 22:36:19.291927  381834 main.go:141] libmachine: (kindnet-587823) define libvirt domain using xml: 
	I0108 22:36:19.291956  381834 main.go:141] libmachine: (kindnet-587823) <domain type='kvm'>
	I0108 22:36:19.291969  381834 main.go:141] libmachine: (kindnet-587823)   <name>kindnet-587823</name>
	I0108 22:36:19.291980  381834 main.go:141] libmachine: (kindnet-587823)   <memory unit='MiB'>3072</memory>
	I0108 22:36:19.291990  381834 main.go:141] libmachine: (kindnet-587823)   <vcpu>2</vcpu>
	I0108 22:36:19.291998  381834 main.go:141] libmachine: (kindnet-587823)   <features>
	I0108 22:36:19.292010  381834 main.go:141] libmachine: (kindnet-587823)     <acpi/>
	I0108 22:36:19.292022  381834 main.go:141] libmachine: (kindnet-587823)     <apic/>
	I0108 22:36:19.292034  381834 main.go:141] libmachine: (kindnet-587823)     <pae/>
	I0108 22:36:19.292045  381834 main.go:141] libmachine: (kindnet-587823)     
	I0108 22:36:19.292083  381834 main.go:141] libmachine: (kindnet-587823)   </features>
	I0108 22:36:19.292111  381834 main.go:141] libmachine: (kindnet-587823)   <cpu mode='host-passthrough'>
	I0108 22:36:19.292127  381834 main.go:141] libmachine: (kindnet-587823)   
	I0108 22:36:19.292139  381834 main.go:141] libmachine: (kindnet-587823)   </cpu>
	I0108 22:36:19.292151  381834 main.go:141] libmachine: (kindnet-587823)   <os>
	I0108 22:36:19.292163  381834 main.go:141] libmachine: (kindnet-587823)     <type>hvm</type>
	I0108 22:36:19.292176  381834 main.go:141] libmachine: (kindnet-587823)     <boot dev='cdrom'/>
	I0108 22:36:19.292189  381834 main.go:141] libmachine: (kindnet-587823)     <boot dev='hd'/>
	I0108 22:36:19.292207  381834 main.go:141] libmachine: (kindnet-587823)     <bootmenu enable='no'/>
	I0108 22:36:19.292224  381834 main.go:141] libmachine: (kindnet-587823)   </os>
	I0108 22:36:19.292234  381834 main.go:141] libmachine: (kindnet-587823)   <devices>
	I0108 22:36:19.292246  381834 main.go:141] libmachine: (kindnet-587823)     <disk type='file' device='cdrom'>
	I0108 22:36:19.292265  381834 main.go:141] libmachine: (kindnet-587823)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/boot2docker.iso'/>
	I0108 22:36:19.292282  381834 main.go:141] libmachine: (kindnet-587823)       <target dev='hdc' bus='scsi'/>
	I0108 22:36:19.292299  381834 main.go:141] libmachine: (kindnet-587823)       <readonly/>
	I0108 22:36:19.292314  381834 main.go:141] libmachine: (kindnet-587823)     </disk>
	I0108 22:36:19.292326  381834 main.go:141] libmachine: (kindnet-587823)     <disk type='file' device='disk'>
	I0108 22:36:19.292341  381834 main.go:141] libmachine: (kindnet-587823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 22:36:19.292359  381834 main.go:141] libmachine: (kindnet-587823)       <source file='/home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/kindnet-587823.rawdisk'/>
	I0108 22:36:19.292384  381834 main.go:141] libmachine: (kindnet-587823)       <target dev='hda' bus='virtio'/>
	I0108 22:36:19.292401  381834 main.go:141] libmachine: (kindnet-587823)     </disk>
	I0108 22:36:19.292416  381834 main.go:141] libmachine: (kindnet-587823)     <interface type='network'>
	I0108 22:36:19.292429  381834 main.go:141] libmachine: (kindnet-587823)       <source network='mk-kindnet-587823'/>
	I0108 22:36:19.292442  381834 main.go:141] libmachine: (kindnet-587823)       <model type='virtio'/>
	I0108 22:36:19.292451  381834 main.go:141] libmachine: (kindnet-587823)     </interface>
	I0108 22:36:19.292462  381834 main.go:141] libmachine: (kindnet-587823)     <interface type='network'>
	I0108 22:36:19.292477  381834 main.go:141] libmachine: (kindnet-587823)       <source network='default'/>
	I0108 22:36:19.292506  381834 main.go:141] libmachine: (kindnet-587823)       <model type='virtio'/>
	I0108 22:36:19.292525  381834 main.go:141] libmachine: (kindnet-587823)     </interface>
	I0108 22:36:19.292535  381834 main.go:141] libmachine: (kindnet-587823)     <serial type='pty'>
	I0108 22:36:19.292549  381834 main.go:141] libmachine: (kindnet-587823)       <target port='0'/>
	I0108 22:36:19.292558  381834 main.go:141] libmachine: (kindnet-587823)     </serial>
	I0108 22:36:19.292567  381834 main.go:141] libmachine: (kindnet-587823)     <console type='pty'>
	I0108 22:36:19.292578  381834 main.go:141] libmachine: (kindnet-587823)       <target type='serial' port='0'/>
	I0108 22:36:19.292603  381834 main.go:141] libmachine: (kindnet-587823)     </console>
	I0108 22:36:19.292616  381834 main.go:141] libmachine: (kindnet-587823)     <rng model='virtio'>
	I0108 22:36:19.292644  381834 main.go:141] libmachine: (kindnet-587823)       <backend model='random'>/dev/random</backend>
	I0108 22:36:19.292666  381834 main.go:141] libmachine: (kindnet-587823)     </rng>
	I0108 22:36:19.292680  381834 main.go:141] libmachine: (kindnet-587823)     
	I0108 22:36:19.292692  381834 main.go:141] libmachine: (kindnet-587823)     
	I0108 22:36:19.292706  381834 main.go:141] libmachine: (kindnet-587823)   </devices>
	I0108 22:36:19.292719  381834 main.go:141] libmachine: (kindnet-587823) </domain>
	I0108 22:36:19.292744  381834 main.go:141] libmachine: (kindnet-587823) 
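The XML logged line by line above is the libvirt domain the kvm2 driver defines: boot the minikube ISO from a SCSI cdrom, attach the raw disk over virtio, and wire two virtio NICs into the private mk-kindnet-587823 network and libvirt's default network. The sketch below renders a domain of the same shape from a template; the struct fields, paths and template text are illustrative, not the driver's own code:

```go
package main

import (
	"os"
	"text/template"
)

// domainTmpl mirrors the shape of the domain XML in the log above.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.PrivateNetwork}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainParams struct {
	Name, ISOPath, DiskPath, PrivateNetwork string
	MemoryMiB, CPUs                         int
}

func main() {
	p := domainParams{
		Name:           "kindnet-587823",
		MemoryMiB:      3072,
		CPUs:           2,
		ISOPath:        "/path/to/boot2docker.iso",
		DiskPath:       "/path/to/kindnet-587823.rawdisk",
		PrivateNetwork: "mk-kindnet-587823",
	}
	// Render to stdout; a driver would hand the result to libvirt to define the domain.
	template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, p)
}
```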
	I0108 22:36:19.298117  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:a1:d2:a1 in network default
	I0108 22:36:19.298968  381834 main.go:141] libmachine: (kindnet-587823) Ensuring networks are active...
	I0108 22:36:19.298998  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:19.300044  381834 main.go:141] libmachine: (kindnet-587823) Ensuring network default is active
	I0108 22:36:19.300540  381834 main.go:141] libmachine: (kindnet-587823) Ensuring network mk-kindnet-587823 is active
	I0108 22:36:19.301146  381834 main.go:141] libmachine: (kindnet-587823) Getting domain xml...
	I0108 22:36:19.302023  381834 main.go:141] libmachine: (kindnet-587823) Creating domain...
	I0108 22:36:20.702956  381834 main.go:141] libmachine: (kindnet-587823) Waiting to get IP...
	I0108 22:36:20.704117  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:20.704652  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:20.704684  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:20.704613  381856 retry.go:31] will retry after 219.438725ms: waiting for machine to come up
	I0108 22:36:20.926245  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:20.927128  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:20.927180  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:20.927032  381856 retry.go:31] will retry after 309.282461ms: waiting for machine to come up
	I0108 22:36:21.237869  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:21.238497  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:21.238527  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:21.238415  381856 retry.go:31] will retry after 364.455917ms: waiting for machine to come up
	I0108 22:36:21.604703  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:21.605341  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:21.605382  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:21.605279  381856 retry.go:31] will retry after 457.906709ms: waiting for machine to come up
	I0108 22:36:22.065199  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:22.065887  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:22.065927  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:22.065813  381856 retry.go:31] will retry after 508.40431ms: waiting for machine to come up
	I0108 22:36:22.575635  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:22.576268  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:22.576304  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:22.576213  381856 retry.go:31] will retry after 593.476694ms: waiting for machine to come up
	I0108 22:36:23.171902  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:23.172432  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:23.172467  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:23.172374  381856 retry.go:31] will retry after 781.296841ms: waiting for machine to come up
	I0108 22:36:23.955177  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:23.955777  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:23.955817  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:23.955720  381856 retry.go:31] will retry after 1.401224024s: waiting for machine to come up
	I0108 22:36:25.358426  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:25.358954  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:25.358995  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:25.358899  381856 retry.go:31] will retry after 1.630872346s: waiting for machine to come up
	I0108 22:36:26.991849  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:26.992412  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:26.992458  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:26.992378  381856 retry.go:31] will retry after 1.571697153s: waiting for machine to come up
	I0108 22:36:28.566532  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:28.567177  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:28.567202  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:28.567114  381856 retry.go:31] will retry after 2.215654145s: waiting for machine to come up
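The retry.go lines show how the driver waits for the guest to pick up a DHCP lease: poll for an IP, and if none is found, sleep a growing, jittered interval and try again. A generic sketch of that poll-with-backoff pattern; the backoff constants and the lookup stub are assumptions, not minikube's tuning:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a little longer
// (with jitter) after each miss, like the "will retry after ..." lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the wait between polls
	}
	return "", errors.New("machine did not report an IP in time")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet") // simulated misses
		}
		return "192.168.72.236", nil
	}, 10)
	fmt.Println(ip, err)
}
```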
	I0108 22:36:28.454571  381248 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503117 seconds
	I0108 22:36:28.454771  381248 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:36:28.533969  381248 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:36:29.106205  381248 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:36:29.106454  381248 kubeadm.go:322] [mark-control-plane] Marking the node auto-587823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:36:29.626757  381248 kubeadm.go:322] [bootstrap-token] Using token: 1ddycb.5ezbehimut2zxwcc
	I0108 22:36:29.629637  381248 out.go:204]   - Configuring RBAC rules ...
	I0108 22:36:29.629804  381248 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:36:29.641505  381248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:36:29.655623  381248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:36:29.666878  381248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:36:29.673570  381248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:36:29.689448  381248 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:36:29.712399  381248 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:36:30.051006  381248 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:36:30.126359  381248 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:36:30.127834  381248 kubeadm.go:322] 
	I0108 22:36:30.127933  381248 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:36:30.127944  381248 kubeadm.go:322] 
	I0108 22:36:30.128037  381248 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:36:30.128047  381248 kubeadm.go:322] 
	I0108 22:36:30.128103  381248 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:36:30.128178  381248 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:36:30.128254  381248 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:36:30.128278  381248 kubeadm.go:322] 
	I0108 22:36:30.128353  381248 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:36:30.128364  381248 kubeadm.go:322] 
	I0108 22:36:30.128434  381248 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:36:30.128441  381248 kubeadm.go:322] 
	I0108 22:36:30.128483  381248 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:36:30.128545  381248 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:36:30.128647  381248 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:36:30.128658  381248 kubeadm.go:322] 
	I0108 22:36:30.128795  381248 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:36:30.128897  381248 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:36:30.128904  381248 kubeadm.go:322] 
	I0108 22:36:30.129003  381248 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1ddycb.5ezbehimut2zxwcc \
	I0108 22:36:30.129160  381248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 \
	I0108 22:36:30.129199  381248 kubeadm.go:322] 	--control-plane 
	I0108 22:36:30.129210  381248 kubeadm.go:322] 
	I0108 22:36:30.129318  381248 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:36:30.129328  381248 kubeadm.go:322] 
	I0108 22:36:30.129429  381248 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1ddycb.5ezbehimut2zxwcc \
	I0108 22:36:30.129546  381248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5592636d451b5d05d3ab28f9eb708eba45caa01667aa5f4fe62709a9020b0487 
	I0108 22:36:30.130144  381248 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
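The join commands printed by kubeadm carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from ca.crt if needed; a sketch follows, where the certificate path is an assumption (on minikube nodes the CA typically sits under /var/lib/minikube/certs):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Recomputes a kubeadm discovery-token-ca-cert-hash: SHA-256 over the
// DER-encoded SubjectPublicKeyInfo of the CA certificate's public key.
func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // path is illustrative
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```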
	I0108 22:36:30.130174  381248 cni.go:84] Creating CNI manager for ""
	I0108 22:36:30.130187  381248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:36:30.132597  381248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:36:30.134432  381248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:36:30.160825  381248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
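The 457-byte conflist scp'd above enables the bridge CNI that the "kvm2 driver + crio runtime" combination recommends; its exact contents are not shown in the log. The sketch below writes a typical bridge/portmap conflist of the same shape, with illustrative plugin settings rather than minikube's actual payload:

```go
package main

import "os"

// bridgeConflist is a representative CNI bridge configuration; the settings
// are illustrative defaults, not the exact file written by minikube.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing this path requires root on the node; shown for illustration only.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```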
	I0108 22:36:30.208577  381248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:36:30.208667  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:30.208667  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=auto-587823 minikube.k8s.io/updated_at=2024_01_08T22_36_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:30.276290  381248 ops.go:34] apiserver oom_adj: -16
	I0108 22:36:30.635568  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:31.135752  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:31.636761  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:32.136118  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:32.636092  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:30.784452  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:30.785030  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:30.785062  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:30.784949  381856 retry.go:31] will retry after 3.276406054s: waiting for machine to come up
	I0108 22:36:33.136671  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:33.636290  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:34.135591  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:34.635759  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:35.136495  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:35.636663  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:36.136641  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:36.636602  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:37.136675  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:37.636381  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:34.063757  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:34.064356  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:34.064389  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:34.064302  381856 retry.go:31] will retry after 4.47642605s: waiting for machine to come up
	I0108 22:36:38.544943  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:38.545460  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find current IP address of domain kindnet-587823 in network mk-kindnet-587823
	I0108 22:36:38.545489  381834 main.go:141] libmachine: (kindnet-587823) DBG | I0108 22:36:38.545403  381856 retry.go:31] will retry after 4.888751999s: waiting for machine to come up
	I0108 22:36:38.136682  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:38.636602  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:39.136137  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:39.636243  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:40.135985  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:40.635681  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:41.136375  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:41.636014  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:42.136649  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:42.636594  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:43.136527  381248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:36:43.325629  381248 kubeadm.go:1088] duration metric: took 13.117035983s to wait for elevateKubeSystemPrivileges.
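The block of repeated `kubectl get sa default` runs above is a readiness poll: minikube keeps asking for the default ServiceAccount until the controller-manager has created it, then proceeds with granting privileges. A rough equivalent, with the kubeconfig path, interval and timeout as assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it
// succeeds, mirroring the repeated ssh_runner invocations in the log.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount has been created
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready within %v", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is ready")
}
```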
	I0108 22:36:43.325679  381248 kubeadm.go:406] StartCluster complete in 27.680233125s
	I0108 22:36:43.325708  381248 settings.go:142] acquiring lock: {Name:mk62e66af58d2c8a061c2ef50ef0985e83e1ddeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:36:43.325815  381248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:36:43.328074  381248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-334768/kubeconfig: {Name:mk0abf4b1e037e15154f04c2b9a4884980455d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:36:43.328473  381248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:36:43.328623  381248 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:36:43.328744  381248 addons.go:69] Setting storage-provisioner=true in profile "auto-587823"
	I0108 22:36:43.328794  381248 addons.go:237] Setting addon storage-provisioner=true in "auto-587823"
	I0108 22:36:43.328806  381248 addons.go:69] Setting default-storageclass=true in profile "auto-587823"
	I0108 22:36:43.328834  381248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-587823"
	I0108 22:36:43.328877  381248 host.go:66] Checking if "auto-587823" exists ...
	I0108 22:36:43.328791  381248 config.go:182] Loaded profile config "auto-587823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:36:43.329417  381248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:43.329417  381248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:43.329474  381248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:43.329492  381248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:43.347859  381248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42053
	I0108 22:36:43.348392  381248 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:43.349010  381248 main.go:141] libmachine: Using API Version  1
	I0108 22:36:43.349046  381248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:43.349512  381248 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:43.349811  381248 main.go:141] libmachine: (auto-587823) Calling .GetState
	I0108 22:36:43.350508  381248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35649
	I0108 22:36:43.351002  381248 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:43.354356  381248 main.go:141] libmachine: Using API Version  1
	I0108 22:36:43.354387  381248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:43.354806  381248 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:43.355579  381248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:43.355648  381248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:43.355715  381248 addons.go:237] Setting addon default-storageclass=true in "auto-587823"
	I0108 22:36:43.355860  381248 host.go:66] Checking if "auto-587823" exists ...
	I0108 22:36:43.356294  381248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:43.356337  381248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:43.374939  381248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0108 22:36:43.375450  381248 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:43.375987  381248 main.go:141] libmachine: Using API Version  1
	I0108 22:36:43.376023  381248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:43.376411  381248 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:43.376705  381248 main.go:141] libmachine: (auto-587823) Calling .GetState
	I0108 22:36:43.377881  381248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0108 22:36:43.378402  381248 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:43.379058  381248 main.go:141] libmachine: Using API Version  1
	I0108 22:36:43.379080  381248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:43.379164  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:36:43.381578  381248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:36:43.436783  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:43.437363  381834 main.go:141] libmachine: (kindnet-587823) Found IP for machine: 192.168.72.236
	I0108 22:36:43.437394  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has current primary IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:43.437441  381834 main.go:141] libmachine: (kindnet-587823) Reserving static IP address...
	I0108 22:36:43.437726  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find host DHCP lease matching {name: "kindnet-587823", mac: "52:54:00:21:f2:59", ip: "192.168.72.236"} in network mk-kindnet-587823
	I0108 22:36:43.546917  381834 main.go:141] libmachine: (kindnet-587823) DBG | Getting to WaitForSSH function...
	I0108 22:36:43.546952  381834 main.go:141] libmachine: (kindnet-587823) Reserved static IP address: 192.168.72.236
	I0108 22:36:43.546966  381834 main.go:141] libmachine: (kindnet-587823) Waiting for SSH to be available...
	I0108 22:36:43.550319  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:43.550675  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823
	I0108 22:36:43.550708  381834 main.go:141] libmachine: (kindnet-587823) DBG | unable to find defined IP address of network mk-kindnet-587823 interface with MAC address 52:54:00:21:f2:59
	I0108 22:36:43.550893  381834 main.go:141] libmachine: (kindnet-587823) DBG | Using SSH client type: external
	I0108 22:36:43.550937  381834 main.go:141] libmachine: (kindnet-587823) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/id_rsa (-rw-------)
	I0108 22:36:43.550997  381834 main.go:141] libmachine: (kindnet-587823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:36:43.551033  381834 main.go:141] libmachine: (kindnet-587823) DBG | About to run SSH command:
	I0108 22:36:43.551052  381834 main.go:141] libmachine: (kindnet-587823) DBG | exit 0
	I0108 22:36:43.555183  381834 main.go:141] libmachine: (kindnet-587823) DBG | SSH cmd err, output: exit status 255: 
	I0108 22:36:43.555218  381834 main.go:141] libmachine: (kindnet-587823) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0108 22:36:43.555229  381834 main.go:141] libmachine: (kindnet-587823) DBG | command : exit 0
	I0108 22:36:43.555249  381834 main.go:141] libmachine: (kindnet-587823) DBG | err     : exit status 255
	I0108 22:36:43.555262  381834 main.go:141] libmachine: (kindnet-587823) DBG | output  : 
	I0108 22:36:43.379731  381248 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:43.383287  381248 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:36:43.383317  381248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:36:43.383341  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:43.383862  381248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:43.383905  381248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:43.386829  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:43.387307  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:43.387333  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:43.387586  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:43.387799  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:43.388260  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:43.388445  381248 sshutil.go:53] new ssh client: &{IP:192.168.61.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa Username:docker}
	I0108 22:36:43.401625  381248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41973
	I0108 22:36:43.402195  381248 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:43.402793  381248 main.go:141] libmachine: Using API Version  1
	I0108 22:36:43.402814  381248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:43.403165  381248 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:43.403377  381248 main.go:141] libmachine: (auto-587823) Calling .GetState
	I0108 22:36:43.405004  381248 main.go:141] libmachine: (auto-587823) Calling .DriverName
	I0108 22:36:43.405321  381248 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:36:43.405341  381248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:36:43.405363  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHHostname
	I0108 22:36:43.408220  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:43.408558  381248 main.go:141] libmachine: (auto-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:74:93", ip: ""} in network mk-auto-587823: {Iface:virbr3 ExpiryTime:2024-01-08 23:35:52 +0000 UTC Type:0 Mac:52:54:00:59:74:93 Iaid: IPaddr:192.168.61.208 Prefix:24 Hostname:auto-587823 Clientid:01:52:54:00:59:74:93}
	I0108 22:36:43.408587  381248 main.go:141] libmachine: (auto-587823) DBG | domain auto-587823 has defined IP address 192.168.61.208 and MAC address 52:54:00:59:74:93 in network mk-auto-587823
	I0108 22:36:43.408816  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHPort
	I0108 22:36:43.409090  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHKeyPath
	I0108 22:36:43.409255  381248 main.go:141] libmachine: (auto-587823) Calling .GetSSHUsername
	I0108 22:36:43.409464  381248 sshutil.go:53] new ssh client: &{IP:192.168.61.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/auto-587823/id_rsa Username:docker}
	I0108 22:36:43.582287  381248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:36:43.584063  381248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:36:43.769844  381248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:36:44.004425  381248 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-587823" context rescaled to 1 replicas
	I0108 22:36:44.004505  381248 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:36:44.007001  381248 out.go:177] * Verifying Kubernetes components...
	I0108 22:36:44.008807  381248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:36:45.202315  381248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.619981018s)
	I0108 22:36:45.202389  381248 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:45.202403  381248 main.go:141] libmachine: (auto-587823) Calling .Close
	I0108 22:36:45.202783  381248 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:45.202806  381248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:45.202841  381248 main.go:141] libmachine: (auto-587823) DBG | Closing plugin on server side
	I0108 22:36:45.202896  381248 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:45.202916  381248 main.go:141] libmachine: (auto-587823) Calling .Close
	I0108 22:36:45.203183  381248 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:45.203200  381248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:45.234354  381248 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:45.234379  381248 main.go:141] libmachine: (auto-587823) Calling .Close
	I0108 22:36:45.234744  381248 main.go:141] libmachine: (auto-587823) DBG | Closing plugin on server side
	I0108 22:36:45.234793  381248 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:45.234809  381248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:45.764838  381248 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.994939717s)
	I0108 22:36:45.764914  381248 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
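The ConfigMap rewrite completed above pipes the CoreDNS Corefile through sed to insert a hosts {} block, mapping host.minikube.internal to the host-side gateway (192.168.61.1 here), ahead of the forward directive. A small string-level sketch of the same transformation, using an illustrative Corefile rather than the cluster's:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block (host.minikube.internal -> hostIP)
// immediately before CoreDNS's forward directive, mimicking the sed pipeline.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	return strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
}

func main() {
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
}`
	fmt.Println(injectHostRecord(corefile, "192.168.61.1"))
}
```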
	I0108 22:36:45.764926  381248 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.756079865s)
	I0108 22:36:45.766041  381248 node_ready.go:35] waiting up to 15m0s for node "auto-587823" to be "Ready" ...
	I0108 22:36:45.766824  381248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.182713244s)
	I0108 22:36:45.766885  381248 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:45.766905  381248 main.go:141] libmachine: (auto-587823) Calling .Close
	I0108 22:36:45.767286  381248 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:45.767306  381248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:45.767316  381248 main.go:141] libmachine: Making call to close driver server
	I0108 22:36:45.767317  381248 main.go:141] libmachine: (auto-587823) DBG | Closing plugin on server side
	I0108 22:36:45.767325  381248 main.go:141] libmachine: (auto-587823) Calling .Close
	I0108 22:36:45.767562  381248 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:36:45.767621  381248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:36:45.769799  381248 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0108 22:36:45.772345  381248 addons.go:508] enable addons completed in 2.443712255s: enabled=[default-storageclass storage-provisioner]
	I0108 22:36:45.795924  381248 node_ready.go:49] node "auto-587823" has status "Ready":"True"
	I0108 22:36:45.795981  381248 node_ready.go:38] duration metric: took 29.909308ms waiting for node "auto-587823" to be "Ready" ...
	I0108 22:36:45.795999  381248 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:36:45.813266  381248 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-bpvn6" in "kube-system" namespace to be "Ready" ...
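The pod_ready wait above boils down to polling the pod object and checking its Ready condition. A minimal client-go sketch of that check, where the kubeconfig path and the pod name are assumptions taken from this log for illustration:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // path is illustrative
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-bpvn6", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```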
	I0108 22:36:46.557945  381834 main.go:141] libmachine: (kindnet-587823) DBG | Getting to WaitForSSH function...
	I0108 22:36:46.560843  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:46.561434  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:46.561478  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:46.561681  381834 main.go:141] libmachine: (kindnet-587823) DBG | Using SSH client type: external
	I0108 22:36:46.561727  381834 main.go:141] libmachine: (kindnet-587823) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/id_rsa (-rw-------)
	I0108 22:36:46.561758  381834 main.go:141] libmachine: (kindnet-587823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:36:46.561777  381834 main.go:141] libmachine: (kindnet-587823) DBG | About to run SSH command:
	I0108 22:36:46.561794  381834 main.go:141] libmachine: (kindnet-587823) DBG | exit 0
	I0108 22:36:46.664645  381834 main.go:141] libmachine: (kindnet-587823) DBG | SSH cmd err, output: <nil>: 
	I0108 22:36:46.665151  381834 main.go:141] libmachine: (kindnet-587823) KVM machine creation complete!
	I0108 22:36:46.665551  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetConfigRaw
	I0108 22:36:46.666293  381834 main.go:141] libmachine: (kindnet-587823) Calling .DriverName
	I0108 22:36:46.666578  381834 main.go:141] libmachine: (kindnet-587823) Calling .DriverName
	I0108 22:36:46.666811  381834 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 22:36:46.666843  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetState
	I0108 22:36:46.668683  381834 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 22:36:46.668706  381834 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 22:36:46.668717  381834 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 22:36:46.668727  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:46.672207  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:46.672748  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:46.672791  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:46.673033  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:46.673316  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:46.673524  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:46.673697  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:46.673923  381834 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:46.674329  381834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0108 22:36:46.674345  381834 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 22:36:46.807888  381834 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:36:46.807921  381834 main.go:141] libmachine: Detecting the provisioner...
	I0108 22:36:46.807933  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:46.811513  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:46.811921  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:46.811956  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:46.812201  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:46.812422  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:46.812597  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:46.812764  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:46.812933  381834 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:46.813415  381834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0108 22:36:46.813441  381834 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 22:36:46.948694  381834 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 22:36:46.948813  381834 main.go:141] libmachine: found compatible host: buildroot
	I0108 22:36:46.948834  381834 main.go:141] libmachine: Provisioning with buildroot...
	I0108 22:36:46.948851  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetMachineName
	I0108 22:36:46.949235  381834 buildroot.go:166] provisioning hostname "kindnet-587823"
	I0108 22:36:46.949271  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetMachineName
	I0108 22:36:46.949456  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:46.953095  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:46.953636  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:46.953706  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:46.954009  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:46.954311  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:46.954563  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:46.954738  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:46.954961  381834 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:46.955345  381834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0108 22:36:46.955401  381834 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-587823 && echo "kindnet-587823" | sudo tee /etc/hostname
	I0108 22:36:47.105449  381834 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-587823
	
	I0108 22:36:47.105478  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:47.108768  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.109228  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:47.109260  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.109471  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:47.109781  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:47.110033  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:47.110235  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:47.110428  381834 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:47.110969  381834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0108 22:36:47.111000  381834 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-587823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-587823/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-587823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:36:47.253520  381834 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:36:47.253574  381834 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-334768/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-334768/.minikube}
	I0108 22:36:47.253618  381834 buildroot.go:174] setting up certificates
	I0108 22:36:47.253640  381834 provision.go:83] configureAuth start
	I0108 22:36:47.253655  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetMachineName
	I0108 22:36:47.253986  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetIP
	I0108 22:36:47.257601  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.258082  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:47.258128  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.258412  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:47.261788  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.262193  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:47.262249  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.262347  381834 provision.go:138] copyHostCerts
	I0108 22:36:47.262448  381834 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem, removing ...
	I0108 22:36:47.262462  381834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem
	I0108 22:36:47.262577  381834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/ca.pem (1078 bytes)
	I0108 22:36:47.262735  381834 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem, removing ...
	I0108 22:36:47.262753  381834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem
	I0108 22:36:47.262793  381834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/cert.pem (1123 bytes)
	I0108 22:36:47.262920  381834 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem, removing ...
	I0108 22:36:47.262930  381834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem
	I0108 22:36:47.262964  381834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-334768/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-334768/.minikube/key.pem (1679 bytes)
	I0108 22:36:47.263028  381834 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca-key.pem org=jenkins.kindnet-587823 san=[192.168.72.236 192.168.72.236 localhost 127.0.0.1 minikube kindnet-587823]
	I0108 22:36:47.337030  381834 provision.go:172] copyRemoteCerts
	I0108 22:36:47.337136  381834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:36:47.337184  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:47.340319  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.340763  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:47.340811  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.341021  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:47.341282  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:47.341472  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:47.341645  381834 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/id_rsa Username:docker}
	I0108 22:36:47.438379  381834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:36:47.467233  381834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0108 22:36:47.494746  381834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:36:47.521803  381834 provision.go:86] duration metric: configureAuth took 268.146138ms
	I0108 22:36:47.521836  381834 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:36:47.522016  381834 config.go:182] Loaded profile config "kindnet-587823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:36:47.522109  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:47.525443  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.525876  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:47.525909  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.526215  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:47.526523  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:47.526749  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:47.526930  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:47.527147  381834 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:47.527598  381834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0108 22:36:47.527626  381834 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:36:47.898486  381834 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:36:47.898525  381834 main.go:141] libmachine: Checking connection to Docker...
	I0108 22:36:47.898535  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetURL
	I0108 22:36:47.900082  381834 main.go:141] libmachine: (kindnet-587823) DBG | Using libvirt version 6000000
	I0108 22:36:47.902965  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.903435  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:47.903477  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.903766  381834 main.go:141] libmachine: Docker is up and running!
	I0108 22:36:47.903791  381834 main.go:141] libmachine: Reticulating splines...
	I0108 22:36:47.903801  381834 client.go:171] LocalClient.Create took 29.078583449s
	I0108 22:36:47.903831  381834 start.go:167] duration metric: libmachine.API.Create for "kindnet-587823" took 29.0786499s
	I0108 22:36:47.903844  381834 start.go:300] post-start starting for "kindnet-587823" (driver="kvm2")
	I0108 22:36:47.903867  381834 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:36:47.903908  381834 main.go:141] libmachine: (kindnet-587823) Calling .DriverName
	I0108 22:36:47.904233  381834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:36:47.904263  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:47.906822  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.907225  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:47.907262  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:47.907537  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:47.907848  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:47.908111  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:47.908277  381834 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/id_rsa Username:docker}
	I0108 22:36:48.006080  381834 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:36:48.011849  381834 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:36:48.011881  381834 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/addons for local assets ...
	I0108 22:36:48.011991  381834 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-334768/.minikube/files for local assets ...
	I0108 22:36:48.012112  381834 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem -> 3419822.pem in /etc/ssl/certs
	I0108 22:36:48.012262  381834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:36:48.022462  381834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/ssl/certs/3419822.pem --> /etc/ssl/certs/3419822.pem (1708 bytes)
	I0108 22:36:48.051580  381834 start.go:303] post-start completed in 147.71005ms
	I0108 22:36:48.051644  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetConfigRaw
	I0108 22:36:48.052209  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetIP
	I0108 22:36:48.055593  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.056028  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:48.056062  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.056390  381834 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/kindnet-587823/config.json ...
	I0108 22:36:48.056632  381834 start.go:128] duration metric: createHost completed in 29.253660723s
	I0108 22:36:48.056659  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:48.059381  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.059823  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:48.059855  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.060019  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:48.060240  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:48.060450  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:48.060615  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:48.060797  381834 main.go:141] libmachine: Using SSH client type: native
	I0108 22:36:48.061164  381834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0108 22:36:48.061176  381834 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:36:48.204888  381834 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704753408.181896353
	
	I0108 22:36:48.204922  381834 fix.go:206] guest clock: 1704753408.181896353
	I0108 22:36:48.204932  381834 fix.go:219] Guest: 2024-01-08 22:36:48.181896353 +0000 UTC Remote: 2024-01-08 22:36:48.056644892 +0000 UTC m=+29.405881417 (delta=125.251461ms)
	I0108 22:36:48.204962  381834 fix.go:190] guest clock delta is within tolerance: 125.251461ms
	I0108 22:36:48.204968  381834 start.go:83] releasing machines lock for "kindnet-587823", held for 29.402158046s
	I0108 22:36:48.204996  381834 main.go:141] libmachine: (kindnet-587823) Calling .DriverName
	I0108 22:36:48.205432  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetIP
	I0108 22:36:48.209293  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.209825  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:48.209870  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.210111  381834 main.go:141] libmachine: (kindnet-587823) Calling .DriverName
	I0108 22:36:48.210872  381834 main.go:141] libmachine: (kindnet-587823) Calling .DriverName
	I0108 22:36:48.211177  381834 main.go:141] libmachine: (kindnet-587823) Calling .DriverName
	I0108 22:36:48.211329  381834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:36:48.211413  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:48.211549  381834 ssh_runner.go:195] Run: cat /version.json
	I0108 22:36:48.211583  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHHostname
	I0108 22:36:48.214172  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.214471  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:48.214516  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.214545  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.214673  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:48.214878  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:48.215066  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:48.215099  381834 main.go:141] libmachine: (kindnet-587823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f2:59", ip: ""} in network mk-kindnet-587823: {Iface:virbr1 ExpiryTime:2024-01-08 23:36:36 +0000 UTC Type:0 Mac:52:54:00:21:f2:59 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:kindnet-587823 Clientid:01:52:54:00:21:f2:59}
	I0108 22:36:48.215121  381834 main.go:141] libmachine: (kindnet-587823) DBG | domain kindnet-587823 has defined IP address 192.168.72.236 and MAC address 52:54:00:21:f2:59 in network mk-kindnet-587823
	I0108 22:36:48.215280  381834 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/id_rsa Username:docker}
	I0108 22:36:48.215301  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHPort
	I0108 22:36:48.215495  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHKeyPath
	I0108 22:36:48.215672  381834 main.go:141] libmachine: (kindnet-587823) Calling .GetSSHUsername
	I0108 22:36:48.215820  381834 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/kindnet-587823/id_rsa Username:docker}
	I0108 22:36:48.350714  381834 ssh_runner.go:195] Run: systemctl --version
	I0108 22:36:48.358373  381834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:36:48.537057  381834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:36:48.543147  381834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:36:48.543238  381834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:36:48.563113  381834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:36:48.563144  381834 start.go:475] detecting cgroup driver to use...
	I0108 22:36:48.563246  381834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:36:48.585190  381834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:36:48.602698  381834 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:36:48.602788  381834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:36:48.623030  381834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:36:48.640126  381834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:16:05 UTC, ends at Mon 2024-01-08 22:36:49 UTC. --
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.170086493Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753409170072319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=12bafb70-fe6e-4bb9-9867-d3246866d351 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.170707186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a0820349-e8e3-4a81-8377-4fd172ff48f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.170760745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a0820349-e8e3-4a81-8377-4fd172ff48f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.170948676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c,PodSandboxId:6b51dd8a2a2b8892e8acd42cd11153d2611c9bd4d40129f425ad4c7268fc012e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752534111353778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c2430d-d84e-415e-83b3-c32e7635fe74,},Annotations:map[string]string{io.kubernetes.container.hash: c3c57d92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6,PodSandboxId:11be2fc68090634a321ee4d5fc0afda5b6cf95356d717aa1bbf03d5ae84f037a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752533540374408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwmkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2,},Annotations:map[string]string{io.kubernetes.container.hash: 69bd94d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570,PodSandboxId:cf4667045a70dd9fe5220f2a651d08f8533a1e47f4840bf7b548433457e4a4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752532144310365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r27zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82dae88-118a-4e13-a714-1240d48dfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8b267c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf,PodSandboxId:3f1a8cb24bd1c6b2abf2c5ae299e0c81d8e7da1f444160a5901faeb870f47873,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752507093529577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e73bf885258e1ba6
2654850d16abea3,},Annotations:map[string]string{io.kubernetes.container.hash: af30e0f5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4,PodSandboxId:191b85966782541646083985440233483d4f699c96b098ee64a38dcdc07c30de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752507211657648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b97eb78da9d1b4f
d8649df06c7ca7c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d,PodSandboxId:98f3c0e3a1bad8e806448c7c2e20618caa5f4e39ef02f80c5969d9f12ec14cd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752507065623484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 27ee4b9df4c37f95e2011b8bd21f25a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348,PodSandboxId:cefcbd6c3f309345f2b99af94d9273982329c934ec3c1a7438d0e797f01a6db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752506813371814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e3295cbb0d1303870eed006ab815b2a8,},Annotations:map[string]string{io.kubernetes.container.hash: c286a60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a0820349-e8e3-4a81-8377-4fd172ff48f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.225541173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e52a5b7a-98a9-4e0b-9c06-aeb11956dcb9 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.225704814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e52a5b7a-98a9-4e0b-9c06-aeb11956dcb9 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.227328091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0514e3e7-d879-4422-98af-25d9a4c4d31d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.227994818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753409227975028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0514e3e7-d879-4422-98af-25d9a4c4d31d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.229172755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ec8241e0-b3ae-4a37-86e7-cc360e6ce744 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.229281945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ec8241e0-b3ae-4a37-86e7-cc360e6ce744 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.229602541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c,PodSandboxId:6b51dd8a2a2b8892e8acd42cd11153d2611c9bd4d40129f425ad4c7268fc012e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752534111353778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c2430d-d84e-415e-83b3-c32e7635fe74,},Annotations:map[string]string{io.kubernetes.container.hash: c3c57d92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6,PodSandboxId:11be2fc68090634a321ee4d5fc0afda5b6cf95356d717aa1bbf03d5ae84f037a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752533540374408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwmkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2,},Annotations:map[string]string{io.kubernetes.container.hash: 69bd94d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570,PodSandboxId:cf4667045a70dd9fe5220f2a651d08f8533a1e47f4840bf7b548433457e4a4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752532144310365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r27zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82dae88-118a-4e13-a714-1240d48dfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8b267c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf,PodSandboxId:3f1a8cb24bd1c6b2abf2c5ae299e0c81d8e7da1f444160a5901faeb870f47873,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752507093529577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e73bf885258e1ba6
2654850d16abea3,},Annotations:map[string]string{io.kubernetes.container.hash: af30e0f5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4,PodSandboxId:191b85966782541646083985440233483d4f699c96b098ee64a38dcdc07c30de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752507211657648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b97eb78da9d1b4f
d8649df06c7ca7c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d,PodSandboxId:98f3c0e3a1bad8e806448c7c2e20618caa5f4e39ef02f80c5969d9f12ec14cd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752507065623484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 27ee4b9df4c37f95e2011b8bd21f25a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348,PodSandboxId:cefcbd6c3f309345f2b99af94d9273982329c934ec3c1a7438d0e797f01a6db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752506813371814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e3295cbb0d1303870eed006ab815b2a8,},Annotations:map[string]string{io.kubernetes.container.hash: c286a60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ec8241e0-b3ae-4a37-86e7-cc360e6ce744 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.278871024Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b2b408fc-5484-45d9-a7b3-8021ae447c10 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.278940895Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b2b408fc-5484-45d9-a7b3-8021ae447c10 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.280920478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=48b92072-b9cb-4951-bddc-88d024c3f1d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.281997475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753409281972228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=48b92072-b9cb-4951-bddc-88d024c3f1d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.283592192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=926a4dc5-220a-4240-a877-2c50bf14bf17 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.283687495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=926a4dc5-220a-4240-a877-2c50bf14bf17 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.285050289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c,PodSandboxId:6b51dd8a2a2b8892e8acd42cd11153d2611c9bd4d40129f425ad4c7268fc012e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752534111353778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c2430d-d84e-415e-83b3-c32e7635fe74,},Annotations:map[string]string{io.kubernetes.container.hash: c3c57d92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6,PodSandboxId:11be2fc68090634a321ee4d5fc0afda5b6cf95356d717aa1bbf03d5ae84f037a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752533540374408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwmkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2,},Annotations:map[string]string{io.kubernetes.container.hash: 69bd94d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570,PodSandboxId:cf4667045a70dd9fe5220f2a651d08f8533a1e47f4840bf7b548433457e4a4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752532144310365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r27zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82dae88-118a-4e13-a714-1240d48dfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8b267c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf,PodSandboxId:3f1a8cb24bd1c6b2abf2c5ae299e0c81d8e7da1f444160a5901faeb870f47873,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752507093529577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e73bf885258e1ba6
2654850d16abea3,},Annotations:map[string]string{io.kubernetes.container.hash: af30e0f5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4,PodSandboxId:191b85966782541646083985440233483d4f699c96b098ee64a38dcdc07c30de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752507211657648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b97eb78da9d1b4f
d8649df06c7ca7c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d,PodSandboxId:98f3c0e3a1bad8e806448c7c2e20618caa5f4e39ef02f80c5969d9f12ec14cd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752507065623484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 27ee4b9df4c37f95e2011b8bd21f25a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348,PodSandboxId:cefcbd6c3f309345f2b99af94d9273982329c934ec3c1a7438d0e797f01a6db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752506813371814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e3295cbb0d1303870eed006ab815b2a8,},Annotations:map[string]string{io.kubernetes.container.hash: c286a60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=926a4dc5-220a-4240-a877-2c50bf14bf17 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.337524010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f3d3e9b3-63ba-4ced-8756-76d0411c9c5e name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.337595328Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f3d3e9b3-63ba-4ced-8756-76d0411c9c5e name=/runtime.v1.RuntimeService/Version
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.339151003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9d9cf763-a8f5-417d-82ed-a91d3e934368 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.339758905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704753409339739657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9d9cf763-a8f5-417d-82ed-a91d3e934368 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.340567427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0ee3f91e-5307-4a4e-88f7-cb0d2f69d98f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.340640117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0ee3f91e-5307-4a4e-88f7-cb0d2f69d98f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:36:49 default-k8s-diff-port-292054 crio[731]: time="2024-01-08 22:36:49.340815412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c,PodSandboxId:6b51dd8a2a2b8892e8acd42cd11153d2611c9bd4d40129f425ad4c7268fc012e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704752534111353778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c2430d-d84e-415e-83b3-c32e7635fe74,},Annotations:map[string]string{io.kubernetes.container.hash: c3c57d92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6,PodSandboxId:11be2fc68090634a321ee4d5fc0afda5b6cf95356d717aa1bbf03d5ae84f037a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704752533540374408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwmkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c01f0fed-4a5f-467e-a4c0-8d4f2bdb12a2,},Annotations:map[string]string{io.kubernetes.container.hash: 69bd94d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570,PodSandboxId:cf4667045a70dd9fe5220f2a651d08f8533a1e47f4840bf7b548433457e4a4ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704752532144310365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r27zw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82dae88-118a-4e13-a714-1240d48dfc4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8b267c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf,PodSandboxId:3f1a8cb24bd1c6b2abf2c5ae299e0c81d8e7da1f444160a5901faeb870f47873,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704752507093529577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e73bf885258e1ba6
2654850d16abea3,},Annotations:map[string]string{io.kubernetes.container.hash: af30e0f5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4,PodSandboxId:191b85966782541646083985440233483d4f699c96b098ee64a38dcdc07c30de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704752507211657648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25b97eb78da9d1b4f
d8649df06c7ca7c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d,PodSandboxId:98f3c0e3a1bad8e806448c7c2e20618caa5f4e39ef02f80c5969d9f12ec14cd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704752507065623484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 27ee4b9df4c37f95e2011b8bd21f25a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348,PodSandboxId:cefcbd6c3f309345f2b99af94d9273982329c934ec3c1a7438d0e797f01a6db6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704752506813371814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-292054,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e3295cbb0d1303870eed006ab815b2a8,},Annotations:map[string]string{io.kubernetes.container.hash: c286a60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0ee3f91e-5307-4a4e-88f7-cb0d2f69d98f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	37ec1a7ab6aa1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   6b51dd8a2a2b8       storage-provisioner
	6c02f8fe98e2f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   11be2fc680906       kube-proxy-bwmkb
	a28f303c4e97b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   cf4667045a70d       coredns-5dd5756b68-r27zw
	87f8525af63e6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   191b859667825       kube-scheduler-default-k8s-diff-port-292054
	bcf8add63ad3e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   3f1a8cb24bd1c       etcd-default-k8s-diff-port-292054
	3e507ce6d6a23       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   98f3c0e3a1bad       kube-controller-manager-default-k8s-diff-port-292054
	491ed169ad2f7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   cefcbd6c3f309       kube-apiserver-default-k8s-diff-port-292054
	
	
	==> coredns [a28f303c4e97bb0df17bb7a9449f25ecbceb6d662b820fab77ea2ab1b2283570] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56909 - 15984 "HINFO IN 1941820745804244463.1315308648900132827. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025653962s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-292054
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-292054
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=default-k8s-diff-port-292054
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_21_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:21:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-292054
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:36:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:32:29 +0000   Mon, 08 Jan 2024 22:21:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:32:29 +0000   Mon, 08 Jan 2024 22:21:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:32:29 +0000   Mon, 08 Jan 2024 22:21:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:32:29 +0000   Mon, 08 Jan 2024 22:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.18
	  Hostname:    default-k8s-diff-port-292054
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 00f023c105c24aeda2854315360f800d
	  System UUID:                00f023c1-05c2-4aed-a285-4315360f800d
	  Boot ID:                    fec1a090-c5ed-42d8-b7f7-12fa03a91aa5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-r27zw                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-292054                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-default-k8s-diff-port-292054             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-292054    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bwmkb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-292054             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-jm9lg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m                kubelet          Node default-k8s-diff-port-292054 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node default-k8s-diff-port-292054 event: Registered Node default-k8s-diff-port-292054 in Controller
	
	
	==> dmesg <==
	[Jan 8 22:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074352] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan 8 22:16] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.746837] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149245] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.649926] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.392742] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.151355] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.209600] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.141618] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[  +0.362096] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[ +18.964784] systemd-fstab-generator[930]: Ignoring "noauto" for root device
	[ +21.812844] kauditd_printk_skb: 29 callbacks suppressed
	[Jan 8 22:21] systemd-fstab-generator[3542]: Ignoring "noauto" for root device
	[ +10.861340] systemd-fstab-generator[3864]: Ignoring "noauto" for root device
	[Jan 8 22:22] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.113569] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [bcf8add63ad3e64a52d016a7fde0a6d5798c7c8c1a9278e708efaaa7eb5514bf] <==
	{"level":"info","ts":"2024-01-08T22:21:49.750869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3895747abc9dda3 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T22:21:49.750895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e3895747abc9dda3 elected leader e3895747abc9dda3 at term 2"}
	{"level":"info","ts":"2024-01-08T22:21:49.75577Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e3895747abc9dda3","local-member-attributes":"{Name:default-k8s-diff-port-292054 ClientURLs:[https://192.168.50.18:2379]}","request-path":"/0/members/e3895747abc9dda3/attributes","cluster-id":"3c16f1003b534ab0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T22:21:49.75591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:21:49.757213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T22:21:49.757402Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:21:49.757879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:21:49.761152Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.18:2379"}
	{"level":"info","ts":"2024-01-08T22:21:49.767549Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T22:21:49.767706Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T22:21:49.812861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3c16f1003b534ab0","local-member-id":"e3895747abc9dda3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:21:49.813036Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:21:49.816513Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:31:49.842539Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-01-08T22:31:49.845527Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":722,"took":"2.535269ms","hash":103817928}
	{"level":"info","ts":"2024-01-08T22:31:49.845612Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":103817928,"revision":722,"compact-revision":-1}
	{"level":"info","ts":"2024-01-08T22:35:40.466588Z","caller":"traceutil/trace.go:171","msg":"trace[1239401888] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"149.179162ms","start":"2024-01-08T22:35:40.317348Z","end":"2024-01-08T22:35:40.466527Z","steps":["trace[1239401888] 'process raft request'  (duration: 148.98816ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:36:14.99921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.549315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-01-08T22:36:14.999364Z","caller":"traceutil/trace.go:171","msg":"trace[792110949] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1180; }","duration":"262.785081ms","start":"2024-01-08T22:36:14.736557Z","end":"2024-01-08T22:36:14.999342Z","steps":["trace[792110949] 'range keys from in-memory index tree'  (duration: 262.355921ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:36:15.198211Z","caller":"traceutil/trace.go:171","msg":"trace[1379278372] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"191.864567ms","start":"2024-01-08T22:36:15.006318Z","end":"2024-01-08T22:36:15.198182Z","steps":["trace[1379278372] 'process raft request'  (duration: 190.982394ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:36:50.023773Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":965}
	{"level":"warn","ts":"2024-01-08T22:36:50.025364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.184773ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15970763645251573966 username:\"kube-apiserver-etcd-client\" auth_revision:1 > compaction:<revision:965 > ","response":"size:5"}
	{"level":"info","ts":"2024-01-08T22:36:50.02571Z","caller":"traceutil/trace.go:171","msg":"trace[203476727] compact","detail":"{revision:965; response_revision:1208; }","duration":"137.447634ms","start":"2024-01-08T22:36:49.888219Z","end":"2024-01-08T22:36:50.025667Z","steps":["trace[203476727] 'check and update compact revision'  (duration: 129.910918ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:36:50.026526Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":965,"took":"2.189665ms","hash":4106310537}
	{"level":"info","ts":"2024-01-08T22:36:50.02661Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4106310537,"revision":965,"compact-revision":722}
	
	
	==> kernel <==
	 22:36:50 up 20 min,  0 users,  load average: 0.24, 0.29, 0.27
	Linux default-k8s-diff-port-292054 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [491ed169ad2f75182ac1dc201393deff29d7e457c64c1623a461fe8935e94348] <==
	W0108 22:31:53.159377       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:31:53.159803       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:31:53.159864       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:31:53.159377       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:31:53.160076       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:31:53.161351       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:32:52.007181       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:32:53.160150       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:32:53.160241       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:32:53.160253       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:32:53.161553       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:32:53.161662       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:32:53.161719       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:33:52.007219       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:34:52.006593       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 22:34:53.160357       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:34:53.160733       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 22:34:53.160786       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 22:34:53.162930       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:34:53.163059       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:34:53.163087       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:35:52.006682       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [3e507ce6d6a23c3d7b2bb084b04bbbce171f981aa79a87452f2e85deafff443d] <==
	I0108 22:31:08.651621       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:31:38.290612       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:31:38.664565       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:32:08.308079       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:08.674923       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:32:38.315230       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:32:38.685843       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:33:08.321238       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:08.700256       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 22:33:21.427120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="280.556µs"
	I0108 22:33:33.421725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="295.787µs"
	E0108 22:33:38.329259       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:33:38.710653       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:34:08.339817       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:34:08.723840       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:34:38.347173       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:34:38.735175       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:35:08.357381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:35:08.758907       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:35:38.365981       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:35:38.772410       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:36:08.380905       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:36:08.804692       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 22:36:38.390623       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 22:36:38.815877       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6c02f8fe98e2fd226d47adb2a0f03f52343bf00044dc42832ac859557ca00cf6] <==
	I0108 22:22:14.112794       1 server_others.go:69] "Using iptables proxy"
	I0108 22:22:14.168091       1 node.go:141] Successfully retrieved node IP: 192.168.50.18
	I0108 22:22:14.285633       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 22:22:14.285681       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:22:14.290881       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:22:14.292798       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:22:14.295558       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:22:14.295618       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:22:14.299022       1 config.go:188] "Starting service config controller"
	I0108 22:22:14.299956       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:22:14.301948       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:22:14.302136       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:22:14.302561       1 config.go:315] "Starting node config controller"
	I0108 22:22:14.302699       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:22:14.402937       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 22:22:14.403033       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:22:14.403102       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [87f8525af63e6094ee5ef80a5660323bff344328f7bdd4616332de8c92a48cb4] <==
	W0108 22:21:53.159525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:21:53.159601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:21:53.322769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:21:53.322821       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 22:21:53.349670       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:21:53.349743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:21:53.353417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:53.353630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:53.367494       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:21:53.367548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:21:53.397386       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:21:53.397597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:21:53.453807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:21:53.453906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 22:21:53.512738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:53.512791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:53.581662       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:21:53.581766       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:21:53.638788       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:21:53.638909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 22:21:53.650904       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:21:53.651042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:21:53.755294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:21:53.755406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0108 22:21:55.674207       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:16:05 UTC, ends at Mon 2024-01-08 22:36:50 UTC. --
	Jan 08 22:33:56 default-k8s-diff-port-292054 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:33:56 default-k8s-diff-port-292054 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:33:56 default-k8s-diff-port-292054 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:34:00 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:00.409521    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:14 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:14.403069    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:25 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:25.406975    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:37 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:37.401849    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:51 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:51.401874    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:34:56 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:34:56.495701    3871 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:34:56 default-k8s-diff-port-292054 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:34:56 default-k8s-diff-port-292054 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:34:56 default-k8s-diff-port-292054 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:35:06 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:35:06.404845    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:35:19 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:35:19.402182    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:35:30 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:35:30.406565    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:35:44 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:35:44.405310    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:35:56 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:35:56.494540    3871 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:35:56 default-k8s-diff-port-292054 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:35:56 default-k8s-diff-port-292054 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:35:56 default-k8s-diff-port-292054 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:35:59 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:35:59.404808    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:36:10 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:36:10.403608    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:36:21 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:36:21.401964    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:36:34 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:36:34.402548    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	Jan 08 22:36:47 default-k8s-diff-port-292054 kubelet[3871]: E0108 22:36:47.402153    3871 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jm9lg" podUID="b94afab5-f573-4ed1-bc29-64eb8e90c574"
	
	
	==> storage-provisioner [37ec1a7ab6aa1132c749e6dc9ea00205f0c272d44a11cd5bcf96291ab70cae8c] <==
	I0108 22:22:14.318703       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:22:14.345088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:22:14.345204       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:22:14.357415       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:22:14.358657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-292054_be176cbb-a878-4179-b11c-1e8615a95ccf!
	I0108 22:22:14.363557       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56fe9315-e25a-4bc3-80aa-74f0ea93b554", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-292054_be176cbb-a878-4179-b11c-1e8615a95ccf became leader
	I0108 22:22:14.461297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-292054_be176cbb-a878-4179-b11c-1e8615a95ccf!
	

                                                
                                                
-- /stdout --
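The kubelet log above repeats two symptoms for this profile: the ip6tables canary cannot be created because the guest kernel has no ip6tables `nat` table, and metrics-server stays in ImagePullBackOff on the deliberately unreachable image `fake.domain/registry.k8s.io/echoserver:1.4`. A minimal diagnostic sketch, not part of the test run and assuming the `default-k8s-diff-port-292054` profile from this run is still available:

	# Check whether the ip6tables NAT table/module exists in the guest kernel.
	minikube -p default-k8s-diff-port-292054 ssh -- sudo lsmod | grep ip6table_nat
	minikube -p default-k8s-diff-port-292054 ssh -- sudo ip6tables -t nat -L -n
	# Confirm metrics-server is pinned to the unreachable test image.
	kubectl --context default-k8s-diff-port-292054 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'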
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-292054 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-jm9lg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-292054 describe pod metrics-server-57f55c9bc5-jm9lg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-292054 describe pod metrics-server-57f55c9bc5-jm9lg: exit status 1 (99.233957ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-jm9lg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-292054 describe pod metrics-server-57f55c9bc5-jm9lg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (80.27s)
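The failure above is the AddonExistsAfterStop check timing out because the metrics-server pod it polls for is no longer present after the restart. A hedged manual equivalent of that check, assuming the context name from this run and the addon's usual `k8s-app=metrics-server` label:

	# List metrics-server pods and the aggregated API it should register.
	kubectl --context default-k8s-diff-port-292054 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context default-k8s-diff-port-292054 get apiservice v1beta1.metrics.k8s.io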

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (140.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-154365 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-154365 --alsologtostderr -v=3: exit status 82 (2m1.814538839s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-154365"  ...
	* Stopping node "newest-cni-154365"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 22:36:16.536391  381687 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:36:16.536549  381687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:36:16.536555  381687 out.go:309] Setting ErrFile to fd 2...
	I0108 22:36:16.536559  381687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:36:16.536804  381687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:36:16.537154  381687 out.go:303] Setting JSON to false
	I0108 22:36:16.537266  381687 mustload.go:65] Loading cluster: newest-cni-154365
	I0108 22:36:16.537673  381687 config.go:182] Loaded profile config "newest-cni-154365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:36:16.537742  381687 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/newest-cni-154365/config.json ...
	I0108 22:36:16.537894  381687 mustload.go:65] Loading cluster: newest-cni-154365
	I0108 22:36:16.538003  381687 config.go:182] Loaded profile config "newest-cni-154365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 22:36:16.538028  381687 stop.go:39] StopHost: newest-cni-154365
	I0108 22:36:16.538616  381687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:36:16.538662  381687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:36:16.559507  381687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I0108 22:36:16.560310  381687 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:36:16.561215  381687 main.go:141] libmachine: Using API Version  1
	I0108 22:36:16.561237  381687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:36:16.561747  381687 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:36:16.564003  381687 out.go:177] * Stopping node "newest-cni-154365"  ...
	I0108 22:36:16.565532  381687 main.go:141] libmachine: Stopping "newest-cni-154365"...
	I0108 22:36:16.565560  381687 main.go:141] libmachine: (newest-cni-154365) Calling .GetState
	I0108 22:36:16.571623  381687 main.go:141] libmachine: (newest-cni-154365) Calling .Stop
	I0108 22:36:16.581455  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 0/60
	I0108 22:36:17.583200  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 1/60
	I0108 22:36:18.584959  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 2/60
	I0108 22:36:19.586634  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 3/60
	I0108 22:36:20.588504  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 4/60
	I0108 22:36:21.591083  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 5/60
	I0108 22:36:22.593543  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 6/60
	I0108 22:36:23.595670  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 7/60
	I0108 22:36:24.596998  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 8/60
	I0108 22:36:25.598957  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 9/60
	I0108 22:36:26.601851  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 10/60
	I0108 22:36:27.603924  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 11/60
	I0108 22:36:28.606720  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 12/60
	I0108 22:36:29.609242  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 13/60
	I0108 22:36:30.611793  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 14/60
	I0108 22:36:31.613928  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 15/60
	I0108 22:36:32.616264  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 16/60
	I0108 22:36:33.618186  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 17/60
	I0108 22:36:34.620921  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 18/60
	I0108 22:36:35.622990  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 19/60
	I0108 22:36:36.624914  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 20/60
	I0108 22:36:37.626283  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 21/60
	I0108 22:36:38.627812  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 22/60
	I0108 22:36:39.630210  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 23/60
	I0108 22:36:40.632063  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 24/60
	I0108 22:36:41.634564  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 25/60
	I0108 22:36:42.636215  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 26/60
	I0108 22:36:43.638592  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 27/60
	I0108 22:36:44.640349  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 28/60
	I0108 22:36:45.642129  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 29/60
	I0108 22:36:46.644700  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 30/60
	I0108 22:36:47.646425  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 31/60
	I0108 22:36:48.649259  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 32/60
	I0108 22:36:49.650688  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 33/60
	I0108 22:36:50.653422  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 34/60
	I0108 22:36:51.655424  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 35/60
	I0108 22:36:52.657555  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 36/60
	I0108 22:36:53.659863  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 37/60
	I0108 22:36:54.663156  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 38/60
	I0108 22:36:55.665341  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 39/60
	I0108 22:36:56.667250  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 40/60
	I0108 22:36:57.668928  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 41/60
	I0108 22:36:58.670768  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 42/60
	I0108 22:36:59.673304  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 43/60
	I0108 22:37:00.675535  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 44/60
	I0108 22:37:01.678313  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 45/60
	I0108 22:37:02.680761  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 46/60
	I0108 22:37:03.683000  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 47/60
	I0108 22:37:04.685384  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 48/60
	I0108 22:37:05.687134  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 49/60
	I0108 22:37:06.689393  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 50/60
	I0108 22:37:07.692097  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 51/60
	I0108 22:37:08.694294  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 52/60
	I0108 22:37:09.696234  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 53/60
	I0108 22:37:10.698167  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 54/60
	I0108 22:37:11.700272  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 55/60
	I0108 22:37:12.703006  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 56/60
	I0108 22:37:13.704477  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 57/60
	I0108 22:37:14.706013  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 58/60
	I0108 22:37:15.708569  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 59/60
	I0108 22:37:16.709963  381687 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:37:16.710034  381687 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:37:16.710063  381687 retry.go:31] will retry after 833.600948ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:37:17.543818  381687 stop.go:39] StopHost: newest-cni-154365
	I0108 22:37:17.544407  381687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:37:17.544475  381687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:37:17.562785  381687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33141
	I0108 22:37:17.563225  381687 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:37:17.563763  381687 main.go:141] libmachine: Using API Version  1
	I0108 22:37:17.563792  381687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:37:17.564113  381687 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:37:17.566738  381687 out.go:177] * Stopping node "newest-cni-154365"  ...
	I0108 22:37:17.568475  381687 main.go:141] libmachine: Stopping "newest-cni-154365"...
	I0108 22:37:17.568500  381687 main.go:141] libmachine: (newest-cni-154365) Calling .GetState
	I0108 22:37:17.570588  381687 main.go:141] libmachine: (newest-cni-154365) Calling .Stop
	I0108 22:37:17.574389  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 0/60
	I0108 22:37:18.576348  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 1/60
	I0108 22:37:19.577825  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 2/60
	I0108 22:37:20.579598  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 3/60
	I0108 22:37:21.582448  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 4/60
	I0108 22:37:22.584517  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 5/60
	I0108 22:37:23.586434  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 6/60
	I0108 22:37:24.588769  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 7/60
	I0108 22:37:25.591412  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 8/60
	I0108 22:37:26.593098  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 9/60
	I0108 22:37:27.595033  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 10/60
	I0108 22:37:28.597801  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 11/60
	I0108 22:37:29.599921  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 12/60
	I0108 22:37:30.602418  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 13/60
	I0108 22:37:31.604786  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 14/60
	I0108 22:37:32.607405  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 15/60
	I0108 22:37:33.609618  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 16/60
	I0108 22:37:34.612353  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 17/60
	I0108 22:37:35.614796  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 18/60
	I0108 22:37:36.616892  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 19/60
	I0108 22:37:37.619019  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 20/60
	I0108 22:37:38.620880  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 21/60
	I0108 22:37:39.623330  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 22/60
	I0108 22:37:40.625488  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 23/60
	I0108 22:37:41.628038  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 24/60
	I0108 22:37:42.630687  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 25/60
	I0108 22:37:43.632220  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 26/60
	I0108 22:37:44.634108  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 27/60
	I0108 22:37:45.636131  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 28/60
	I0108 22:37:46.638363  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 29/60
	I0108 22:37:47.640717  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 30/60
	I0108 22:37:48.642801  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 31/60
	I0108 22:37:49.644782  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 32/60
	I0108 22:37:50.646596  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 33/60
	I0108 22:37:51.648929  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 34/60
	I0108 22:37:52.650676  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 35/60
	I0108 22:37:53.652363  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 36/60
	I0108 22:37:54.654591  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 37/60
	I0108 22:37:55.656682  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 38/60
	I0108 22:37:56.659354  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 39/60
	I0108 22:37:57.662476  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 40/60
	I0108 22:37:58.663888  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 41/60
	I0108 22:37:59.666514  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 42/60
	I0108 22:38:00.816681  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 43/60
	I0108 22:38:01.818477  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 44/60
	I0108 22:38:02.820789  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 45/60
	I0108 22:38:03.822353  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 46/60
	I0108 22:38:04.825045  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 47/60
	I0108 22:38:05.827035  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 48/60
	I0108 22:38:06.828532  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 49/60
	I0108 22:38:07.831185  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 50/60
	I0108 22:38:08.833221  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 51/60
	I0108 22:38:09.836633  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 52/60
	I0108 22:38:10.838732  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 53/60
	I0108 22:38:11.840836  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 54/60
	I0108 22:38:12.844535  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 55/60
	I0108 22:38:13.849121  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 56/60
	I0108 22:38:14.850818  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 57/60
	I0108 22:38:15.854345  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 58/60
	I0108 22:38:16.859600  381687 main.go:141] libmachine: (newest-cni-154365) Waiting for machine to stop 59/60
	I0108 22:38:18.256684  381687 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 22:38:18.256751  381687 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 22:38:18.259091  381687 out.go:177] 
	W0108 22:38:18.260800  381687 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 22:38:18.260824  381687 out.go:239] * 
	* 
	W0108 22:38:18.264480  381687 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:38:18.266184  381687 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p newest-cni-154365 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-154365 -n newest-cni-154365
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-154365 -n newest-cni-154365: exit status 3 (18.638460099s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:38:36.903816  385614 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0108 22:38:36.903851  385614 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-154365" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (140.45s)
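For local triage of the GUEST_STOP_TIMEOUT above: the driver waited out two full 60-step stop loops without the guest ever leaving the "Running" state and then gave up with exit status 82. A minimal reproduction sketch, assuming the kvm2 driver names the libvirt domain after the profile (an assumption, not shown in this log), is to re-run the stop and, if it hangs again, inspect and force off the domain directly:

  out/minikube-linux-amd64 stop -p newest-cni-154365 --alsologtostderr -v=3
  virsh --connect qemu:///system list --all
  virsh --connect qemu:///system destroy newest-cni-154365    # hard power-off if the graceful stop keeps hanging

The stderr box above already points at /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log and at "minikube logs --file=logs.txt" for the guest-side view.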

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-154365 -n newest-cni-154365
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-154365 -n newest-cni-154365: exit status 3 (3.226179895s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:38:40.131747  385898 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0108 22:38:40.131779  385898 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-154365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0108 22:38:42.120428  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-154365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.159114119s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-154365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-154365 -n newest-cni-154365
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-154365 -n newest-cni-154365: exit status 3 (3.057994166s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 22:38:49.347868  385969 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0108 22:38:49.347898  385969 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-154365" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.44s)
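This EnableAddonAfterStop failure follows directly from the stop timeout above: the VM was never brought down cleanly, every SSH dial to 192.168.39.87:22 fails with "no route to host", and so both the status check and the addon enable (which lists paused containers over SSH) bail out. Once the host is genuinely stopped or reachable again, the same two commands from the log can be replayed by hand:

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-154365
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-154365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4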

                                                
                                    

Test pass (238/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.74
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 5.3
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 4.92
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.15
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
26 TestBinaryMirror 0.58
27 TestOffline 113.27
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 152.2
34 TestAddons/parallel/Registry 17.63
36 TestAddons/parallel/InspektorGadget 11.62
37 TestAddons/parallel/MetricsServer 6.06
38 TestAddons/parallel/HelmTiller 16.58
40 TestAddons/parallel/CSI 96.67
41 TestAddons/parallel/Headlamp 15.69
42 TestAddons/parallel/CloudSpanner 6.69
44 TestAddons/parallel/NvidiaDevicePlugin 5.61
45 TestAddons/parallel/Yakd 5.01
48 TestAddons/serial/GCPAuth/Namespaces 0.12
50 TestCertOptions 52.83
51 TestCertExpiration 339.32
53 TestForceSystemdFlag 75.99
54 TestForceSystemdEnv 85.92
56 TestKVMDriverInstallOrUpdate 3.49
60 TestErrorSpam/setup 47.59
61 TestErrorSpam/start 0.4
62 TestErrorSpam/status 0.81
63 TestErrorSpam/pause 1.62
64 TestErrorSpam/unpause 1.78
65 TestErrorSpam/stop 2.28
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 99.77
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 41.44
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.07
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
77 TestFunctional/serial/CacheCmd/cache/add_local 1.52
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.06
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
82 TestFunctional/serial/CacheCmd/cache/delete 0.13
83 TestFunctional/serial/MinikubeKubectlCmd 0.13
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
85 TestFunctional/serial/ExtraConfig 32.7
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.51
88 TestFunctional/serial/LogsFileCmd 1.55
89 TestFunctional/serial/InvalidService 4.6
91 TestFunctional/parallel/ConfigCmd 0.49
92 TestFunctional/parallel/DashboardCmd 21.82
93 TestFunctional/parallel/DryRun 0.31
94 TestFunctional/parallel/InternationalLanguage 0.16
95 TestFunctional/parallel/StatusCmd 1.18
99 TestFunctional/parallel/ServiceCmdConnect 12.8
100 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/PersistentVolumeClaim 52.9
103 TestFunctional/parallel/SSHCmd 0.47
104 TestFunctional/parallel/CpCmd 1.62
105 TestFunctional/parallel/MySQL 28.22
106 TestFunctional/parallel/FileSync 0.31
107 TestFunctional/parallel/CertSync 1.65
111 TestFunctional/parallel/NodeLabels 0.07
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
115 TestFunctional/parallel/License 0.2
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.78
118 TestFunctional/parallel/ServiceCmd/DeployApp 12.24
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
131 TestFunctional/parallel/ServiceCmd/List 0.57
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
134 TestFunctional/parallel/ServiceCmd/Format 0.48
135 TestFunctional/parallel/ServiceCmd/URL 0.45
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
137 TestFunctional/parallel/MountCmd/any-port 19.35
138 TestFunctional/parallel/ProfileCmd/profile_list 0.35
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.36
144 TestFunctional/parallel/ImageCommands/ImageBuild 5.53
145 TestFunctional/parallel/ImageCommands/Setup 1.05
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.56
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.76
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.43
149 TestFunctional/parallel/MountCmd/specific-port 2.21
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.5
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.91
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.94
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.22
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.28
155 TestFunctional/delete_addon-resizer_images 0.07
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestIngressAddonLegacy/StartLegacyK8sCluster 110.4
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.52
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.63
168 TestJSONOutput/start/Command 100.82
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.7
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.69
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 7.11
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.23
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 100.36
200 TestMountStart/serial/StartWithMountFirst 28.14
201 TestMountStart/serial/VerifyMountFirst 0.41
202 TestMountStart/serial/StartWithMountSecond 33.62
203 TestMountStart/serial/VerifyMountSecond 0.42
204 TestMountStart/serial/DeleteFirst 0.68
205 TestMountStart/serial/VerifyMountPostDelete 0.43
206 TestMountStart/serial/Stop 1.23
207 TestMountStart/serial/RestartStopped 23.27
208 TestMountStart/serial/VerifyMountPostStop 0.42
211 TestMultiNode/serial/FreshStart2Nodes 106.59
212 TestMultiNode/serial/DeployApp2Nodes 4.03
214 TestMultiNode/serial/AddNode 46.98
215 TestMultiNode/serial/MultiNodeLabels 0.06
216 TestMultiNode/serial/ProfileList 0.22
217 TestMultiNode/serial/CopyFile 7.76
218 TestMultiNode/serial/StopNode 3.01
219 TestMultiNode/serial/StartAfterStop 29.52
221 TestMultiNode/serial/DeleteNode 1.6
223 TestMultiNode/serial/RestartMultiNode 536.94
224 TestMultiNode/serial/ValidateNameConflict 51.64
237 TestKubernetesUpgrade 236.48
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
241 TestNoKubernetes/serial/StartWithK8s 117.12
242 TestStoppedBinaryUpgrade/Setup 0.34
244 TestNoKubernetes/serial/StartWithStopK8s 9.26
245 TestNoKubernetes/serial/Start 31.23
246 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
247 TestNoKubernetes/serial/ProfileList 1.38
248 TestNoKubernetes/serial/Stop 2.16
256 TestNoKubernetes/serial/StartNoArgs 55.24
257 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
259 TestPause/serial/Start 67.85
267 TestNetworkPlugins/group/false 5.6
271 TestPause/serial/SecondStartNoReconfiguration 97.53
273 TestStartStop/group/old-k8s-version/serial/FirstStart 164.97
274 TestStoppedBinaryUpgrade/MinikubeLogs 0.48
276 TestStartStop/group/no-preload/serial/FirstStart 172.61
277 TestPause/serial/Pause 1.32
278 TestPause/serial/VerifyStatus 0.3
279 TestPause/serial/Unpause 1.29
280 TestPause/serial/PauseAgain 1.51
281 TestPause/serial/DeletePaused 1.01
282 TestPause/serial/VerifyDeletedResources 0.76
284 TestStartStop/group/embed-certs/serial/FirstStart 136.74
286 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 135.86
287 TestStartStop/group/old-k8s-version/serial/DeployApp 9.64
288 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.24
290 TestStartStop/group/no-preload/serial/DeployApp 9.41
291 TestStartStop/group/embed-certs/serial/DeployApp 9.37
292 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.34
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.39
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.36
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.36
300 TestStartStop/group/old-k8s-version/serial/SecondStart 411.23
303 TestStartStop/group/no-preload/serial/SecondStart 603.53
304 TestStartStop/group/embed-certs/serial/SecondStart 860.76
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 882.38
315 TestStartStop/group/newest-cni/serial/FirstStart 65.72
317 TestNetworkPlugins/group/auto/Start 113.68
318 TestStartStop/group/newest-cni/serial/DeployApp 0
319 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 4.3
321 TestNetworkPlugins/group/kindnet/Start 80.29
322 TestNetworkPlugins/group/calico/Start 96.33
323 TestNetworkPlugins/group/auto/KubeletFlags 0.41
324 TestNetworkPlugins/group/auto/NetCatPod 15.23
325 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
326 TestNetworkPlugins/group/auto/DNS 0.2
327 TestNetworkPlugins/group/auto/Localhost 0.2
328 TestNetworkPlugins/group/auto/HairPin 0.2
329 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
330 TestNetworkPlugins/group/kindnet/NetCatPod 13.29
331 TestNetworkPlugins/group/kindnet/DNS 0.28
332 TestNetworkPlugins/group/kindnet/Localhost 0.23
333 TestNetworkPlugins/group/kindnet/HairPin 0.22
334 TestNetworkPlugins/group/custom-flannel/Start 98.25
335 TestNetworkPlugins/group/enable-default-cni/Start 119.02
336 TestNetworkPlugins/group/calico/ControllerPod 6.01
337 TestNetworkPlugins/group/calico/KubeletFlags 0.29
338 TestNetworkPlugins/group/calico/NetCatPod 16.32
340 TestStartStop/group/newest-cni/serial/SecondStart 414.44
341 TestNetworkPlugins/group/calico/DNS 0.24
342 TestNetworkPlugins/group/calico/Localhost 0.17
343 TestNetworkPlugins/group/calico/HairPin 0.19
344 TestNetworkPlugins/group/flannel/Start 340.34
345 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
346 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.29
347 TestNetworkPlugins/group/custom-flannel/DNS 0.2
348 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
349 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
350 TestNetworkPlugins/group/bridge/Start 337.24
351 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
352 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.3
353 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
354 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
355 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
356 TestNetworkPlugins/group/flannel/ControllerPod 6.01
357 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
358 TestNetworkPlugins/group/flannel/NetCatPod 14.22
359 TestNetworkPlugins/group/flannel/DNS 0.21
360 TestNetworkPlugins/group/flannel/Localhost 0.2
361 TestNetworkPlugins/group/flannel/HairPin 0.19
362 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
365 TestStartStop/group/newest-cni/serial/Pause 2.71
366 TestNetworkPlugins/group/bridge/KubeletFlags 0.56
367 TestNetworkPlugins/group/bridge/NetCatPod 17.29
368 TestNetworkPlugins/group/bridge/DNS 0.2
369 TestNetworkPlugins/group/bridge/Localhost 0.17
370 TestNetworkPlugins/group/bridge/HairPin 0.17
x
+
TestDownloadOnly/v1.16.0/json-events (8.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-947844 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-947844 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.737538979s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-947844
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-947844: exit status 85 (80.619308ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-947844 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |          |
	|         | -p download-only-947844        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:02:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:02:04.407010  341994 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:02:04.407133  341994 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:02:04.407143  341994 out.go:309] Setting ErrFile to fd 2...
	I0108 21:02:04.407148  341994 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:02:04.407343  341994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	W0108 21:02:04.407494  341994 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-334768/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-334768/.minikube/config/config.json: no such file or directory
	I0108 21:02:04.408085  341994 out.go:303] Setting JSON to true
	I0108 21:02:04.409253  341994 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6251,"bootTime":1704741474,"procs":502,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:02:04.409322  341994 start.go:138] virtualization: kvm guest
	I0108 21:02:04.412015  341994 out.go:97] [download-only-947844] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:02:04.413570  341994 out.go:169] MINIKUBE_LOCATION=17866
	I0108 21:02:04.412147  341994 notify.go:220] Checking for updates...
	W0108 21:02:04.412170  341994 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 21:02:04.416305  341994 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:02:04.417750  341994 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:02:04.419313  341994 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:02:04.420565  341994 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 21:02:04.422936  341994 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 21:02:04.423186  341994 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:02:04.455459  341994 out.go:97] Using the kvm2 driver based on user configuration
	I0108 21:02:04.455487  341994 start.go:298] selected driver: kvm2
	I0108 21:02:04.455492  341994 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:02:04.455851  341994 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:02:04.455932  341994 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:02:04.470425  341994 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:02:04.470476  341994 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:02:04.470926  341994 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0108 21:02:04.471078  341994 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 21:02:04.471148  341994 cni.go:84] Creating CNI manager for ""
	I0108 21:02:04.471162  341994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:02:04.471172  341994 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:02:04.471179  341994 start_flags.go:321] config:
	{Name:download-only-947844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-947844 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:02:04.471511  341994 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:02:04.473555  341994 out.go:97] Downloading VM boot image ...
	I0108 21:02:04.473592  341994 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 21:02:07.238923  341994 out.go:97] Starting control plane node download-only-947844 in cluster download-only-947844
	I0108 21:02:07.238956  341994 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 21:02:07.268677  341994 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 21:02:07.268713  341994 cache.go:56] Caching tarball of preloaded images
	I0108 21:02:07.268934  341994 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 21:02:07.270853  341994 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 21:02:07.270872  341994 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:02:07.303630  341994 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-947844"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
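The exit status 85 in this log check is tolerated by the test: a download-only profile never creates a control-plane node (the stdout above ends with 'The control plane node "" does not exist.'), so "minikube logs" has nothing to collect, and LogsDuration only records how long that call takes. The pair of commands exercised, copied from the output above, is:

  out/minikube-linux-amd64 start -o=json --download-only -p download-only-947844 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
  out/minikube-linux-amd64 logs -p download-only-947844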

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (5.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-947844 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-947844 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.298732794s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.30s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-947844
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-947844: exit status 85 (87.15795ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-947844 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |          |
	|         | -p download-only-947844        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-947844 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |          |
	|         | -p download-only-947844        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:02:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:02:13.227945  342050 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:02:13.228227  342050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:02:13.228237  342050 out.go:309] Setting ErrFile to fd 2...
	I0108 21:02:13.228242  342050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:02:13.228449  342050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	W0108 21:02:13.228577  342050 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-334768/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-334768/.minikube/config/config.json: no such file or directory
	I0108 21:02:13.228973  342050 out.go:303] Setting JSON to true
	I0108 21:02:13.230071  342050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6259,"bootTime":1704741474,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:02:13.230129  342050 start.go:138] virtualization: kvm guest
	I0108 21:02:13.236646  342050 out.go:97] [download-only-947844] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:02:13.238127  342050 out.go:169] MINIKUBE_LOCATION=17866
	I0108 21:02:13.236823  342050 notify.go:220] Checking for updates...
	I0108 21:02:13.240872  342050 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:02:13.242296  342050 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:02:13.243560  342050 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:02:13.244858  342050 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 21:02:13.247446  342050 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 21:02:13.247911  342050 config.go:182] Loaded profile config "download-only-947844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0108 21:02:13.247970  342050 start.go:810] api.Load failed for download-only-947844: filestore "download-only-947844": Docker machine "download-only-947844" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 21:02:13.248052  342050 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 21:02:13.248105  342050 start.go:810] api.Load failed for download-only-947844: filestore "download-only-947844": Docker machine "download-only-947844" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 21:02:13.279570  342050 out.go:97] Using the kvm2 driver based on existing profile
	I0108 21:02:13.279620  342050 start.go:298] selected driver: kvm2
	I0108 21:02:13.279625  342050 start.go:902] validating driver "kvm2" against &{Name:download-only-947844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-947844 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:02:13.280021  342050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:02:13.280098  342050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-334768/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:02:13.294625  342050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:02:13.295401  342050 cni.go:84] Creating CNI manager for ""
	I0108 21:02:13.295425  342050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:02:13.295442  342050 start_flags.go:321] config:
	{Name:download-only-947844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-947844 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:02:13.295635  342050 iso.go:125] acquiring lock: {Name:mk6d83406bd55e975f50b4a725fa9a5fba62cb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:02:13.297430  342050 out.go:97] Starting control plane node download-only-947844 in cluster download-only-947844
	I0108 21:02:13.297455  342050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:02:13.332040  342050 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:02:13.332086  342050 cache.go:56] Caching tarball of preloaded images
	I0108 21:02:13.332237  342050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:02:13.334366  342050 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 21:02:13.334393  342050 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:02:13.364189  342050 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:02:16.805548  342050 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:02:16.805649  342050 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17866-334768/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 21:02:17.736116  342050 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:02:17.736278  342050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/download-only-947844/config.json ...
	I0108 21:02:17.736491  342050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:02:17.736685  342050 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17866-334768/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-947844"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
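A minimal shell sketch of what this subtest exercises, assuming the same out/minikube-linux-amd64 binary and the download-only-947844 profile shown in the log above: running "minikube logs" against a download-only profile, which has no control-plane node yet, returns exit status 85 (as recorded at aaa_download_only_test.go:173), and the subtest still passes, so the non-zero exit appears to be the expected outcome here.

	out/minikube-linux-amd64 logs -p download-only-947844
	echo "exit status: $?"   # 85 per the report above, since no control-plane node exists yet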

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (4.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-947844 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-947844 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.92465365s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.92s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-947844
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-947844: exit status 85 (78.857855ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-947844 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |          |
	|         | -p download-only-947844           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-947844 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |          |
	|         | -p download-only-947844           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-947844 | jenkins | v1.32.0 | 08 Jan 24 21:02 UTC |          |
	|         | -p download-only-947844           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:02:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:02:18.615921  342095 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:02:18.616228  342095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:02:18.616239  342095 out.go:309] Setting ErrFile to fd 2...
	I0108 21:02:18.616244  342095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:02:18.616502  342095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	W0108 21:02:18.616660  342095 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-334768/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-334768/.minikube/config/config.json: no such file or directory
	I0108 21:02:18.617139  342095 out.go:303] Setting JSON to true
	I0108 21:02:18.618327  342095 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6265,"bootTime":1704741474,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:02:18.618399  342095 start.go:138] virtualization: kvm guest
	I0108 21:02:18.620584  342095 out.go:97] [download-only-947844] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:02:18.622332  342095 out.go:169] MINIKUBE_LOCATION=17866
	I0108 21:02:18.620822  342095 notify.go:220] Checking for updates...
	I0108 21:02:18.625129  342095 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:02:18.626791  342095 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:02:18.628429  342095 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:02:18.630259  342095 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-947844"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-947844
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-537343 --alsologtostderr --binary-mirror http://127.0.0.1:43023 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-537343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-537343
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestOffline (113.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-778466 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-778466 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m51.971485333s)
helpers_test.go:175: Cleaning up "offline-crio-778466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-778466
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-778466: (1.294265968s)
--- PASS: TestOffline (113.27s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-417518
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-417518: exit status 85 (69.861478ms)

                                                
                                                
-- stdout --
	* Profile "addons-417518" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-417518"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-417518
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-417518: exit status 85 (70.695917ms)

                                                
                                                
-- stdout --
	* Profile "addons-417518" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-417518"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (152.2s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-417518 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-417518 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.202578729s)
--- PASS: TestAddons/Setup (152.20s)

                                                
                                    
TestAddons/parallel/Registry (17.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 29.460989ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-x6wr5" [87175079-3fbe-407b-b38d-1ef946385d32] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020067941s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sxr27" [b9ba0c2a-2815-46d7-a4ca-7b81a07d2778] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005315147s
addons_test.go:340: (dbg) Run:  kubectl --context addons-417518 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-417518 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-417518 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.700567303s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 ip
2024/01/08 21:05:12 [DEBUG] GET http://192.168.39.218:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-417518 addons disable registry --alsologtostderr -v=1: (1.700068509s)
--- PASS: TestAddons/parallel/Registry (17.63s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.62s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j6g7s" [09e4de8a-ead7-40bc-971c-2b6bea12db53] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004904724s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-417518
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-417518: (6.613284695s)
--- PASS: TestAddons/parallel/InspektorGadget (11.62s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.06s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 29.767426ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-cvgwj" [cacc38d2-0ddb-4fad-aab1-9d56fb63e65b] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.019473977s
addons_test.go:415: (dbg) Run:  kubectl --context addons-417518 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.06s)

                                                
                                    
TestAddons/parallel/HelmTiller (16.58s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 7.273273ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-kfkkh" [a5f7ab68-b517-4693-acb4-fc7c512b7d00] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.011792025s
addons_test.go:473: (dbg) Run:  kubectl --context addons-417518 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-417518 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.903276452s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (16.58s)

                                                
                                    
TestAddons/parallel/CSI (96.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 30.529394ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-417518 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-417518 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5f3d5733-dfac-4f5e-9872-b03d3ccb745b] Pending
helpers_test.go:344: "task-pv-pod" [5f3d5733-dfac-4f5e-9872-b03d3ccb745b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5f3d5733-dfac-4f5e-9872-b03d3ccb745b] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.005170283s
addons_test.go:584: (dbg) Run:  kubectl --context addons-417518 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-417518 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-417518 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-417518 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-417518 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-417518 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-417518 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1d5c4875-7d94-461e-85dc-57b6ddc861de] Pending
helpers_test.go:344: "task-pv-pod-restore" [1d5c4875-7d94-461e-85dc-57b6ddc861de] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1d5c4875-7d94-461e-85dc-57b6ddc861de] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004438523s
addons_test.go:626: (dbg) Run:  kubectl --context addons-417518 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-417518 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-417518 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-417518 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.845926657s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-417518 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (96.67s)

                                                
                                    
TestAddons/parallel/Headlamp (15.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-417518 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-417518 --alsologtostderr -v=1: (1.680732101s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-f4qhq" [f7d385c9-f32a-465c-8a10-f00b1b199d34] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-f4qhq" [f7d385c9-f32a-465c-8a10-f00b1b199d34] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.003907494s
--- PASS: TestAddons/parallel/Headlamp (15.69s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-4l8zc" [c170ce4c-1ea9-4ca7-975d-db840bf34a91] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005520729s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-417518
--- PASS: TestAddons/parallel/CloudSpanner (6.69s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fhphr" [f86f2776-fb1d-4a75-8d29-8fcb306bd7cf] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006763654s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-417518
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-gn4q5" [786efbea-f92d-4fb6-ab90-454c08ba2467] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004976546s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-417518 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-417518 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (52.83s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-223082 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-223082 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (51.189013494s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-223082 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-223082 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-223082 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-223082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-223082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-223082: (1.083122311s)
--- PASS: TestCertOptions (52.83s)

                                                
                                    
TestCertExpiration (339.32s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-523607 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-523607 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (54.864414565s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-523607 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-523607 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m43.260667459s)
helpers_test.go:175: Cleaning up "cert-expiration-523607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-523607
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-523607: (1.193235874s)
--- PASS: TestCertExpiration (339.32s)

                                                
                                    
TestForceSystemdFlag (75.99s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-599794 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0108 22:02:44.575101  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-599794 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.860958279s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-599794 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-599794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-599794
--- PASS: TestForceSystemdFlag (75.99s)

                                                
                                    
TestForceSystemdEnv (85.92s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-827712 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-827712 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.820962105s)
helpers_test.go:175: Cleaning up "force-systemd-env-827712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-827712
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-827712: (1.102038458s)
--- PASS: TestForceSystemdEnv (85.92s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.49s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.49s)

                                                
                                    
TestErrorSpam/setup (47.59s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-631496 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-631496 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-631496 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-631496 --driver=kvm2  --container-runtime=crio: (47.587099247s)
--- PASS: TestErrorSpam/setup (47.59s)

                                                
                                    
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
TestErrorSpam/status (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
TestErrorSpam/unpause (1.78s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

                                                
                                    
TestErrorSpam/stop (2.28s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 stop: (2.100202912s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-631496 --log_dir /tmp/nospam-631496 stop
--- PASS: TestErrorSpam/stop (2.28s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17866-334768/.minikube/files/etc/test/nested/copy/341982/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (99.77s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-848083 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-848083 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m39.772110088s)
--- PASS: TestFunctional/serial/StartWithProxy (99.77s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.44s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-848083 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-848083 --alsologtostderr -v=8: (41.435703608s)
functional_test.go:659: soft start took 41.436470046s for "functional-848083" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.44s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-848083 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 cache add registry.k8s.io/pause:3.1: (1.039354948s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 cache add registry.k8s.io/pause:3.3: (1.064136247s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 cache add registry.k8s.io/pause:latest: (1.098684519s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-848083 /tmp/TestFunctionalserialCacheCmdcacheadd_local334889404/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 cache add minikube-local-cache-test:functional-848083
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 cache add minikube-local-cache-test:functional-848083: (1.174163482s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 cache delete minikube-local-cache-test:functional-848083
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-848083
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.52s)
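
To replay the local-image caching flow above by hand, the same commands work against any running profile; a minimal sketch, assuming a throwaway image built from an arbitrary build context (the ./build-context path is a placeholder, the test uses a generated temp directory):
$ docker build -t minikube-local-cache-test:functional-848083 ./build-context
$ out/minikube-linux-amd64 -p functional-848083 cache add minikube-local-cache-test:functional-848083
$ out/minikube-linux-amd64 -p functional-848083 cache delete minikube-local-cache-test:functional-848083
$ docker rmi minikube-local-cache-test:functional-848083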

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (241.305118ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
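
The sequence above is the expected shape of a cache reload check: remove the cached image inside the node, confirm crictl no longer finds it (the exit status 1 above is the intended outcome), reload the cache, and confirm the image is back. A minimal sketch using the same commands the test ran:
$ out/minikube-linux-amd64 -p functional-848083 ssh sudo crictl rmi registry.k8s.io/pause:latest
$ out/minikube-linux-amd64 -p functional-848083 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # expected to fail: image removed
$ out/minikube-linux-amd64 -p functional-848083 cache reload
$ out/minikube-linux-amd64 -p functional-848083 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds after reload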

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 kubectl -- --context functional-848083 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-848083 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.7s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-848083 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-848083 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.701196119s)
functional_test.go:757: restart took 32.701349136s for "functional-848083" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.70s)
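
The restart above passes a component-scoped flag through to the apiserver of the existing profile; a minimal sketch of the same invocation (any component.key=value pair understood by that component can be supplied the same way):
$ out/minikube-linux-amd64 start -p functional-848083 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all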

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-848083 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 logs: (1.508222021s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 logs --file /tmp/TestFunctionalserialLogsFileCmd3227678695/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 logs --file /tmp/TestFunctionalserialLogsFileCmd3227678695/001/logs.txt: (1.551088742s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
TestFunctional/serial/InvalidService (4.6s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-848083 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-848083
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-848083: exit status 115 (310.806392ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.208:30535 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-848083 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-848083 delete -f testdata/invalidsvc.yaml: (1.077264541s)
--- PASS: TestFunctional/serial/InvalidService (4.60s)
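
testdata/invalidsvc.yaml is not reproduced in this log; the SVC_UNREACHABLE / exit status 115 path only needs a Service whose selector matches no running pod. A purely illustrative manifest (names and selector are assumptions, not the actual fixture):
$ kubectl --context functional-848083 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod    # matches nothing, so the service never gets a running endpoint
  ports:
  - port: 80
EOF
$ out/minikube-linux-amd64 service invalid-svc -p functional-848083    # exits 115 with SVC_UNREACHABLE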

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 config get cpus: exit status 14 (71.58504ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 config get cpus: exit status 14 (66.072322ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
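
The exit status 14 above is the expected result of config get for a key that is not set; the test round-trips set/get/unset and checks that get fails again once the key is removed. By hand:
$ out/minikube-linux-amd64 -p functional-848083 config set cpus 2
$ out/minikube-linux-amd64 -p functional-848083 config get cpus      # prints 2
$ out/minikube-linux-amd64 -p functional-848083 config unset cpus
$ out/minikube-linux-amd64 -p functional-848083 config get cpus      # exit 14: specified key could not be found in config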

                                                
                                    
TestFunctional/parallel/DashboardCmd (21.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-848083 --alsologtostderr -v=1]
E0108 21:15:17.580953  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-848083 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 349823: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.82s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-848083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-848083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (155.157089ms)

                                                
                                                
-- stdout --
	* [functional-848083] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:15:16.454049  349517 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:15:16.454222  349517 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:15:16.454237  349517 out.go:309] Setting ErrFile to fd 2...
	I0108 21:15:16.454246  349517 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:15:16.454463  349517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 21:15:16.455099  349517 out.go:303] Setting JSON to false
	I0108 21:15:16.456137  349517 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7043,"bootTime":1704741474,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:15:16.456210  349517 start.go:138] virtualization: kvm guest
	I0108 21:15:16.458833  349517 out.go:177] * [functional-848083] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:15:16.460473  349517 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:15:16.461922  349517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:15:16.460549  349517 notify.go:220] Checking for updates...
	I0108 21:15:16.463754  349517 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:15:16.465584  349517 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:15:16.467349  349517 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:15:16.468904  349517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:15:16.470929  349517 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:15:16.471510  349517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:15:16.471592  349517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:15:16.486802  349517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0108 21:15:16.487235  349517 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:15:16.487802  349517 main.go:141] libmachine: Using API Version  1
	I0108 21:15:16.487828  349517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:15:16.488252  349517 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:15:16.488443  349517 main.go:141] libmachine: (functional-848083) Calling .DriverName
	I0108 21:15:16.488746  349517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:15:16.489127  349517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:15:16.489174  349517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:15:16.503904  349517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35233
	I0108 21:15:16.504339  349517 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:15:16.504882  349517 main.go:141] libmachine: Using API Version  1
	I0108 21:15:16.504909  349517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:15:16.505310  349517 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:15:16.505518  349517 main.go:141] libmachine: (functional-848083) Calling .DriverName
	I0108 21:15:16.538438  349517 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 21:15:16.540163  349517 start.go:298] selected driver: kvm2
	I0108 21:15:16.540183  349517 start.go:902] validating driver "kvm2" against &{Name:functional-848083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-848083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.208 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:15:16.540374  349517 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:15:16.542873  349517 out.go:177] 
	W0108 21:15:16.544483  349517 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 21:15:16.545819  349517 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-848083 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
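
Both dry-run invocations validate the request against the existing profile without touching the VM; the first fails fast (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB minimum, while the second, with no memory override, passes. A condensed sketch:
$ out/minikube-linux-amd64 start -p functional-848083 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio    # exit 23
$ out/minikube-linux-amd64 start -p functional-848083 --dry-run --driver=kvm2 --container-runtime=crio                   # exit 0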

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-848083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-848083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (159.868093ms)

                                                
                                                
-- stdout --
	* [functional-848083] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:15:16.759528  349573 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:15:16.759655  349573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:15:16.759666  349573 out.go:309] Setting ErrFile to fd 2...
	I0108 21:15:16.759673  349573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:15:16.759976  349573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 21:15:16.760564  349573 out.go:303] Setting JSON to false
	I0108 21:15:16.761653  349573 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7043,"bootTime":1704741474,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:15:16.761751  349573 start.go:138] virtualization: kvm guest
	I0108 21:15:16.764058  349573 out.go:177] * [functional-848083] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0108 21:15:16.765640  349573 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:15:16.767006  349573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:15:16.765731  349573 notify.go:220] Checking for updates...
	I0108 21:15:16.768496  349573 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 21:15:16.769941  349573 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 21:15:16.771408  349573 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:15:16.772818  349573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:15:16.774580  349573 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:15:16.775074  349573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:15:16.775132  349573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:15:16.789648  349573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41573
	I0108 21:15:16.790049  349573 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:15:16.790627  349573 main.go:141] libmachine: Using API Version  1
	I0108 21:15:16.790655  349573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:15:16.791016  349573 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:15:16.791175  349573 main.go:141] libmachine: (functional-848083) Calling .DriverName
	I0108 21:15:16.791469  349573 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:15:16.791750  349573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:15:16.791786  349573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:15:16.810780  349573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40901
	I0108 21:15:16.811229  349573 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:15:16.811767  349573 main.go:141] libmachine: Using API Version  1
	I0108 21:15:16.811792  349573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:15:16.812197  349573 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:15:16.812412  349573 main.go:141] libmachine: (functional-848083) Calling .DriverName
	I0108 21:15:16.849546  349573 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0108 21:15:16.850940  349573 start.go:298] selected driver: kvm2
	I0108 21:15:16.850956  349573 start.go:902] validating driver "kvm2" against &{Name:functional-848083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-848083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.208 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:15:16.851115  349573 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:15:16.853611  349573 out.go:177] 
	W0108 21:15:16.855164  349573 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 21:15:16.856733  349573 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
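
status is checked in its three output modes above: default, Go template, and JSON. The template fields come from the status struct (Host, Kubelet, APIServer, Kubeconfig); the "kublet:" text in the command above is just the literal label the test prints around {{.Kubelet}}, not a field name. A sketch of the same checks:
$ out/minikube-linux-amd64 -p functional-848083 status
$ out/minikube-linux-amd64 -p functional-848083 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
$ out/minikube-linux-amd64 -p functional-848083 status -o json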

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-848083 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-848083 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-c7gfl" [063a5b7e-fde5-4f36-973d-1fb99c99a9ab] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-c7gfl" [063a5b7e-fde5-4f36-973d-1fb99c99a9ab] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004396102s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.208:31892
functional_test.go:1674: http://192.168.50.208:31892: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-c7gfl

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.208:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.208:31892
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.80s)
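
The connectivity check above reduces to: create a deployment, expose it as a NodePort service, ask minikube for the node URL, and fetch it. A minimal sketch (the final curl stands in for the test's HTTP GET and is not a command the test itself runs; the node port is assigned by Kubernetes and will differ per run):
$ kubectl --context functional-848083 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
$ kubectl --context functional-848083 expose deployment hello-node-connect --type=NodePort --port=8080
$ out/minikube-linux-amd64 -p functional-848083 service hello-node-connect --url
http://192.168.50.208:31892
$ curl -s http://192.168.50.208:31892/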

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (52.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [36e61d39-5196-4441-8144-d72fff17162a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0048652s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-848083 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-848083 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-848083 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-848083 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-848083 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c2074705-1550-4343-b72c-2cb85bafe5e0] Pending
helpers_test.go:344: "sp-pod" [c2074705-1550-4343-b72c-2cb85bafe5e0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0108 21:14:56.854793  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:14:56.860883  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:14:56.871185  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:14:56.891510  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:14:56.931921  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [c2074705-1550-4343-b72c-2cb85bafe5e0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.010013732s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-848083 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-848083 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-848083 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [16e89188-4ba6-40f0-8eca-11b923d83fbd] Pending
helpers_test.go:344: "sp-pod" [16e89188-4ba6-40f0-8eca-11b923d83fbd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [16e89188-4ba6-40f0-8eca-11b923d83fbd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.007675183s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-848083 exec sp-pod -- ls /tmp/mount
E0108 21:15:38.061676  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
2024/01/08 21:15:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.90s)
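
The claim check above verifies that data written through the PVC survives deleting and re-creating the consuming pod; the essential sequence, using the same fixtures the test applies, is:
$ kubectl --context functional-848083 apply -f testdata/storage-provisioner/pvc.yaml
$ kubectl --context functional-848083 apply -f testdata/storage-provisioner/pod.yaml
$ kubectl --context functional-848083 exec sp-pod -- touch /tmp/mount/foo
$ kubectl --context functional-848083 delete -f testdata/storage-provisioner/pod.yaml
$ kubectl --context functional-848083 apply -f testdata/storage-provisioner/pod.yaml
$ kubectl --context functional-848083 exec sp-pod -- ls /tmp/mount    # foo should still be listed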

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh -n functional-848083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 cp functional-848083:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1497745626/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh -n functional-848083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh -n functional-848083 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)
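
cp is exercised in both directions and against a path that does not yet exist in the VM, with ssh plus cat verifying each copy; a condensed sketch (the local destination below is shortened from the test's generated temp path):
$ out/minikube-linux-amd64 -p functional-848083 cp testdata/cp-test.txt /home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p functional-848083 cp functional-848083:/home/docker/cp-test.txt /tmp/cp-test.txt
$ out/minikube-linux-amd64 -p functional-848083 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
$ out/minikube-linux-amd64 -p functional-848083 ssh -n functional-848083 "sudo cat /home/docker/cp-test.txt"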

                                                
                                    
TestFunctional/parallel/MySQL (28.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-848083 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-68wjs" [63bccb6d-fae4-4aed-8ef1-630dd1908c5c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-68wjs" [63bccb6d-fae4-4aed-8ef1-630dd1908c5c] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.004240291s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-848083 exec mysql-859648c796-68wjs -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-848083 exec mysql-859648c796-68wjs -- mysql -ppassword -e "show databases;": exit status 1 (191.213602ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-848083 exec mysql-859648c796-68wjs -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-848083 exec mysql-859648c796-68wjs -- mysql -ppassword -e "show databases;": exit status 1 (148.556456ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-848083 exec mysql-859648c796-68wjs -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.22s)
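
The two ERROR 2002 failures above are expected while mysqld is still initialising inside the container; the test simply retries the query until the socket is up. The manual equivalent (substitute the pod name reported by get pods, since it changes per run):
$ kubectl --context functional-848083 replace --force -f testdata/mysql.yaml
$ kubectl --context functional-848083 get pods -l app=mysql    # wait for Running
$ kubectl --context functional-848083 exec mysql-859648c796-68wjs -- mysql -ppassword -e "show databases;"    # may need a retry or two right after startup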

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/341982/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /etc/test/nested/copy/341982/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/341982.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /etc/ssl/certs/341982.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/341982.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /usr/share/ca-certificates/341982.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3419822.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /etc/ssl/certs/3419822.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/3419822.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /usr/share/ca-certificates/3419822.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)
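
The six checks above confirm the synced test certificates are present under both the copied filename and the hashed OpenSSL symlink name; the same spot-check can be done interactively:
$ out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /etc/ssl/certs/341982.pem"
$ out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /usr/share/ca-certificates/341982.pem"
$ out/minikube-linux-amd64 -p functional-848083 ssh "sudo cat /etc/ssl/certs/51391683.0"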

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-848083 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 ssh "sudo systemctl is-active docker": exit status 1 (268.178998ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 ssh "sudo systemctl is-active containerd": exit status 1 (289.153498ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
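
On a crio cluster both probes above are expected to fail: systemctl reports the unit inactive and exits with status 3, and the resulting non-zero exit is exactly what the test asserts. A sketch of the same probes (the crio line is an added illustration, not part of the test):
$ out/minikube-linux-amd64 -p functional-848083 ssh "sudo systemctl is-active docker"        # inactive, non-zero exit
$ out/minikube-linux-amd64 -p functional-848083 ssh "sudo systemctl is-active containerd"    # inactive, non-zero exit
$ out/minikube-linux-amd64 -p functional-848083 ssh "sudo systemctl is-active crio"          # active on this cluster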

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-848083 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-848083 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-j9m98" [2fa09729-823f-4cca-98c6-de9223218548] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-j9m98" [2fa09729-823f-4cca-98c6-de9223218548] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.006953473s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)
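The deployment flow above is three plain kubectl calls: create the echoserver deployment, expose it as a NodePort service on port 8080, and wait for the pod behind the app=hello-node label to become Ready. A hedged Go sketch driving the same sequence through os/exec, assuming the kubeconfig context name from this run; the 10m timeout mirrors the test's wait budget, but the helper's actual polling logic is not reproduced here:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and aborts the sketch on any error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	fmt.Printf("%s", out)
}

func main() {
	ctx := "functional-848083" // kubeconfig context created by this test run

	// Deploy the echoserver image and expose it on a NodePort, mirroring
	// the two kubectl invocations logged above.
	run("kubectl", "--context", ctx, "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")

	// Block until the pod behind the app=hello-node label reports Ready,
	// roughly what the 10m0s poll above is doing.
	run("kubectl", "--context", ctx, "wait", "--for=condition=ready",
		"pod", "-l", "app=hello-node", "--timeout=10m")
}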

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 service list
E0108 21:14:57.012380  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:14:57.172868  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:14:57.493942  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 service list -o json
functional_test.go:1493: Took "511.651949ms" to run "out/minikube-linux-amd64 -p functional-848083 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 service --namespace=default --https --url hello-node
E0108 21:14:58.134987  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
functional_test.go:1521: found endpoint: https://192.168.50.208:30945
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.208:30945
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
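The URL found here (http://192.168.50.208:30945) is the NodePort endpoint of the hello-node service on the VM's IP. A small Go sketch, assuming minikube is on PATH as "minikube" and the profile from this run exists, that fetches the URL the same way and adds a quick HTTP reachability check (not part of the original test):

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the hello-node service,
	// as the test above does with "service hello-node --url".
	out, err := exec.Command("minikube", "-p", "functional-848083",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.50.208:30945

	// A quick reachability check against the echoserver endpoint.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("endpoint", url, "answered with", resp.Status)
}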

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (19.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdany-port4174714544/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704748499632612799" to /tmp/TestFunctionalparallelMountCmdany-port4174714544/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704748499632612799" to /tmp/TestFunctionalparallelMountCmdany-port4174714544/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704748499632612799" to /tmp/TestFunctionalparallelMountCmdany-port4174714544/001/test-1704748499632612799
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T /mount-9p | grep 9p"
E0108 21:14:59.658958  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.286557ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 21:14 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 21:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 21:14 test-1704748499632612799
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh cat /mount-9p/test-1704748499632612799
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-848083 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2b0d2477-a346-4ddf-b4a9-5153d90a0901] Pending
E0108 21:15:02.219731  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [2b0d2477-a346-4ddf-b4a9-5153d90a0901] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0108 21:15:07.340043  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [2b0d2477-a346-4ddf-b4a9-5153d90a0901] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2b0d2477-a346-4ddf-b4a9-5153d90a0901] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.004641015s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-848083 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdany-port4174714544/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.35s)
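The mount test starts "minikube mount" as a background process, waits for the 9p mount to appear in the guest (the first findmnt attempt above fails before the mount is ready), and then exercises it from a pod. A rough Go sketch of the start-and-verify part only; the host path /tmp/demo-mount is hypothetical and must exist, and the fixed 5-second sleep stands in for the test's retry loop:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-848083"

	// Start the 9p mount in the background; "minikube mount" blocks while
	// it serves the mount, so it is launched like the daemonized command
	// in the log above. /tmp/demo-mount is a placeholder host directory.
	mount := exec.Command("minikube", "-p", profile, "mount",
		"/tmp/demo-mount:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	// Give the mount a moment to appear, then confirm it from the guest,
	// mirroring the "findmnt -T /mount-9p | grep 9p" check in the test.
	time.Sleep(5 * time.Second)
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"findmnt -T /mount-9p | grep 9p").CombinedOutput()
	if err != nil {
		log.Fatalf("mount not visible in guest: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}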

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "278.890775ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "75.412717ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "334.875489ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "95.825747ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-848083 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-848083
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-848083
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-848083 image ls --format short --alsologtostderr:
I0108 21:15:26.454196  350433 out.go:296] Setting OutFile to fd 1 ...
I0108 21:15:26.454359  350433 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:26.454373  350433 out.go:309] Setting ErrFile to fd 2...
I0108 21:15:26.454380  350433 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:26.454655  350433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
I0108 21:15:26.455273  350433 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:26.455434  350433 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:26.456073  350433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:26.456115  350433 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:26.470938  350433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41285
I0108 21:15:26.471416  350433 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:26.471927  350433 main.go:141] libmachine: Using API Version  1
I0108 21:15:26.471944  350433 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:26.472338  350433 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:26.472586  350433 main.go:141] libmachine: (functional-848083) Calling .GetState
I0108 21:15:26.474481  350433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:26.474532  350433 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:26.493378  350433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
I0108 21:15:26.493836  350433 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:26.494320  350433 main.go:141] libmachine: Using API Version  1
I0108 21:15:26.494345  350433 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:26.494718  350433 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:26.494900  350433 main.go:141] libmachine: (functional-848083) Calling .DriverName
I0108 21:15:26.495099  350433 ssh_runner.go:195] Run: systemctl --version
I0108 21:15:26.495121  350433 main.go:141] libmachine: (functional-848083) Calling .GetSSHHostname
I0108 21:15:26.498030  350433 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:26.498428  350433 main.go:141] libmachine: (functional-848083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:9c:95", ip: ""} in network mk-functional-848083: {Iface:virbr1 ExpiryTime:2024-01-08 22:11:51 +0000 UTC Type:0 Mac:52:54:00:7a:9c:95 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:functional-848083 Clientid:01:52:54:00:7a:9c:95}
I0108 21:15:26.498462  350433 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined IP address 192.168.50.208 and MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:26.498543  350433 main.go:141] libmachine: (functional-848083) Calling .GetSSHPort
I0108 21:15:26.498725  350433 main.go:141] libmachine: (functional-848083) Calling .GetSSHKeyPath
I0108 21:15:26.498893  350433 main.go:141] libmachine: (functional-848083) Calling .GetSSHUsername
I0108 21:15:26.499048  350433 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/functional-848083/id_rsa Username:docker}
I0108 21:15:26.615781  350433 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 21:15:26.700221  350433 main.go:141] libmachine: Making call to close driver server
I0108 21:15:26.700241  350433 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:26.700569  350433 main.go:141] libmachine: (functional-848083) DBG | Closing plugin on server side
I0108 21:15:26.700660  350433 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:26.700677  350433 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 21:15:26.700689  350433 main.go:141] libmachine: Making call to close driver server
I0108 21:15:26.700703  350433 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:26.700934  350433 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:26.700953  350433 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 21:15:26.700984  350433 main.go:141] libmachine: (functional-848083) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-848083 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-848083  | 2dcd8a30015d0 | 3.35kB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-848083  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-848083 image ls --format table --alsologtostderr:
I0108 21:15:26.817364  350490 out.go:296] Setting OutFile to fd 1 ...
I0108 21:15:26.817499  350490 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:26.817510  350490 out.go:309] Setting ErrFile to fd 2...
I0108 21:15:26.817518  350490 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:26.817798  350490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
I0108 21:15:26.818606  350490 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:26.818756  350490 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:26.819333  350490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:26.819416  350490 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:26.835118  350490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35325
I0108 21:15:26.835625  350490 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:26.836274  350490 main.go:141] libmachine: Using API Version  1
I0108 21:15:26.836302  350490 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:26.836688  350490 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:26.837017  350490 main.go:141] libmachine: (functional-848083) Calling .GetState
I0108 21:15:26.839037  350490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:26.839080  350490 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:26.855077  350490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
I0108 21:15:26.855595  350490 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:26.856087  350490 main.go:141] libmachine: Using API Version  1
I0108 21:15:26.856106  350490 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:26.856467  350490 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:26.856586  350490 main.go:141] libmachine: (functional-848083) Calling .DriverName
I0108 21:15:26.856759  350490 ssh_runner.go:195] Run: systemctl --version
I0108 21:15:26.856792  350490 main.go:141] libmachine: (functional-848083) Calling .GetSSHHostname
I0108 21:15:26.859766  350490 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:26.860162  350490 main.go:141] libmachine: (functional-848083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:9c:95", ip: ""} in network mk-functional-848083: {Iface:virbr1 ExpiryTime:2024-01-08 22:11:51 +0000 UTC Type:0 Mac:52:54:00:7a:9c:95 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:functional-848083 Clientid:01:52:54:00:7a:9c:95}
I0108 21:15:26.860183  350490 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined IP address 192.168.50.208 and MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:26.860454  350490 main.go:141] libmachine: (functional-848083) Calling .GetSSHPort
I0108 21:15:26.860631  350490 main.go:141] libmachine: (functional-848083) Calling .GetSSHKeyPath
I0108 21:15:26.860742  350490 main.go:141] libmachine: (functional-848083) Calling .GetSSHUsername
I0108 21:15:26.861058  350490 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/functional-848083/id_rsa Username:docker}
I0108 21:15:26.971509  350490 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 21:15:27.043278  350490 main.go:141] libmachine: Making call to close driver server
I0108 21:15:27.043300  350490 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:27.043602  350490 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:27.043618  350490 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 21:15:27.043635  350490 main.go:141] libmachine: Making call to close driver server
I0108 21:15:27.043644  350490 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:27.043881  350490 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:27.043904  350490 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 21:15:27.043979  350490 main.go:141] libmachine: (functional-848083) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-848083 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"
repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"
],"size":"53621675"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d876
8d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-848083"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5e
f9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820
c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"2dcd8a30015d0519eaa328c000197fcecb030de2bd52d7ee5715ef30e4672fe7","repoDigests":["localhost/minikube-local-cache-test@sha256:208b62933f86a063e32a851599abb8ea5cb0038a64e5a684d1abceb253839cd8"],"repoTags":["localhost/minikube-local-cache-test:functional-848083"],"size":"3345"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","
repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-848083 image ls --format json --alsologtostderr:
I0108 21:15:26.791093  350480 out.go:296] Setting OutFile to fd 1 ...
I0108 21:15:26.791393  350480 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:26.791408  350480 out.go:309] Setting ErrFile to fd 2...
I0108 21:15:26.791416  350480 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:26.792019  350480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
I0108 21:15:26.792942  350480 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:26.793106  350480 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:26.793745  350480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:26.793809  350480 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:26.818489  350480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
I0108 21:15:26.819002  350480 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:26.819841  350480 main.go:141] libmachine: Using API Version  1
I0108 21:15:26.819871  350480 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:26.820286  350480 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:26.820484  350480 main.go:141] libmachine: (functional-848083) Calling .GetState
I0108 21:15:26.822744  350480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:26.822787  350480 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:26.842322  350480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
I0108 21:15:26.842783  350480 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:26.843309  350480 main.go:141] libmachine: Using API Version  1
I0108 21:15:26.843336  350480 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:26.843706  350480 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:26.843929  350480 main.go:141] libmachine: (functional-848083) Calling .DriverName
I0108 21:15:26.844116  350480 ssh_runner.go:195] Run: systemctl --version
I0108 21:15:26.844148  350480 main.go:141] libmachine: (functional-848083) Calling .GetSSHHostname
I0108 21:15:26.847251  350480 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:26.847563  350480 main.go:141] libmachine: (functional-848083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:9c:95", ip: ""} in network mk-functional-848083: {Iface:virbr1 ExpiryTime:2024-01-08 22:11:51 +0000 UTC Type:0 Mac:52:54:00:7a:9c:95 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:functional-848083 Clientid:01:52:54:00:7a:9c:95}
I0108 21:15:26.847596  350480 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined IP address 192.168.50.208 and MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:26.847748  350480 main.go:141] libmachine: (functional-848083) Calling .GetSSHPort
I0108 21:15:26.847937  350480 main.go:141] libmachine: (functional-848083) Calling .GetSSHKeyPath
I0108 21:15:26.848136  350480 main.go:141] libmachine: (functional-848083) Calling .GetSSHUsername
I0108 21:15:26.848248  350480 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/functional-848083/id_rsa Username:docker}
I0108 21:15:26.953946  350480 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 21:15:27.037279  350480 main.go:141] libmachine: Making call to close driver server
I0108 21:15:27.037296  350480 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:27.037594  350480 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:27.037613  350480 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 21:15:27.037628  350480 main.go:141] libmachine: Making call to close driver server
I0108 21:15:27.037637  350480 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:27.037634  350480 main.go:141] libmachine: (functional-848083) DBG | Closing plugin on server side
I0108 21:15:27.038026  350480 main.go:141] libmachine: (functional-848083) DBG | Closing plugin on server side
I0108 21:15:27.038032  350480 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:27.038066  350480 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
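The JSON emitted by "image ls --format json" is a flat list of objects with id, repoDigests, repoTags and size fields, as visible in the stdout above. A short Go sketch that decodes that shape, assuming minikube is invoked as "minikube" with the profile from this run; the struct below is inferred from the logged output rather than taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the JSON stdout above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-848083",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}

	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Println(img.RepoTags[0], img.Size)
		}
	}
}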

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-848083 image ls --format yaml --alsologtostderr:
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 2dcd8a30015d0519eaa328c000197fcecb030de2bd52d7ee5715ef30e4672fe7
repoDigests:
- localhost/minikube-local-cache-test@sha256:208b62933f86a063e32a851599abb8ea5cb0038a64e5a684d1abceb253839cd8
repoTags:
- localhost/minikube-local-cache-test:functional-848083
size: "3345"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-848083
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-848083 image ls --format yaml --alsologtostderr:
I0108 21:15:26.449547  350432 out.go:296] Setting OutFile to fd 1 ...
I0108 21:15:26.449835  350432 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:26.449844  350432 out.go:309] Setting ErrFile to fd 2...
I0108 21:15:26.449849  350432 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:26.450057  350432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
I0108 21:15:26.450673  350432 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:26.450781  350432 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:26.451214  350432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:26.451278  350432 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:26.466408  350432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
I0108 21:15:26.466906  350432 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:26.467519  350432 main.go:141] libmachine: Using API Version  1
I0108 21:15:26.467542  350432 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:26.467884  350432 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:26.468144  350432 main.go:141] libmachine: (functional-848083) Calling .GetState
I0108 21:15:26.470230  350432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:26.470270  350432 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:26.485871  350432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
I0108 21:15:26.486342  350432 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:26.486888  350432 main.go:141] libmachine: Using API Version  1
I0108 21:15:26.486919  350432 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:26.487413  350432 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:26.487662  350432 main.go:141] libmachine: (functional-848083) Calling .DriverName
I0108 21:15:26.487893  350432 ssh_runner.go:195] Run: systemctl --version
I0108 21:15:26.487942  350432 main.go:141] libmachine: (functional-848083) Calling .GetSSHHostname
I0108 21:15:26.490821  350432 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:26.491321  350432 main.go:141] libmachine: (functional-848083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:9c:95", ip: ""} in network mk-functional-848083: {Iface:virbr1 ExpiryTime:2024-01-08 22:11:51 +0000 UTC Type:0 Mac:52:54:00:7a:9c:95 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:functional-848083 Clientid:01:52:54:00:7a:9c:95}
I0108 21:15:26.491399  350432 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined IP address 192.168.50.208 and MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:26.491502  350432 main.go:141] libmachine: (functional-848083) Calling .GetSSHPort
I0108 21:15:26.491673  350432 main.go:141] libmachine: (functional-848083) Calling .GetSSHKeyPath
I0108 21:15:26.491821  350432 main.go:141] libmachine: (functional-848083) Calling .GetSSHUsername
I0108 21:15:26.491939  350432 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/functional-848083/id_rsa Username:docker}
I0108 21:15:26.637945  350432 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 21:15:26.736220  350432 main.go:141] libmachine: Making call to close driver server
I0108 21:15:26.736241  350432 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:26.736576  350432 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:26.736597  350432 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 21:15:26.736608  350432 main.go:141] libmachine: Making call to close driver server
I0108 21:15:26.736617  350432 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:26.736635  350432 main.go:141] libmachine: (functional-848083) DBG | Closing plugin on server side
I0108 21:15:26.736877  350432 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:26.736897  350432 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 ssh pgrep buildkitd: exit status 1 (237.836249ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image build -t localhost/my-image:functional-848083 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 image build -t localhost/my-image:functional-848083 testdata/build --alsologtostderr: (5.0484698s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-848083 image build -t localhost/my-image:functional-848083 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c59b3e6b05b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-848083
--> dd9b5fa55b8
Successfully tagged localhost/my-image:functional-848083
dd9b5fa55b8e09da40e5067d4525c25daeaab0cf9a87fc6ff7900e7644277fbb
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-848083 image build -t localhost/my-image:functional-848083 testdata/build --alsologtostderr:
I0108 21:15:27.341894  350555 out.go:296] Setting OutFile to fd 1 ...
I0108 21:15:27.342156  350555 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:27.342167  350555 out.go:309] Setting ErrFile to fd 2...
I0108 21:15:27.342172  350555 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:15:27.342386  350555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
I0108 21:15:27.343005  350555 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:27.343637  350555 config.go:182] Loaded profile config "functional-848083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 21:15:27.344074  350555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:27.344138  350555 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:27.359071  350555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
I0108 21:15:27.359603  350555 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:27.360319  350555 main.go:141] libmachine: Using API Version  1
I0108 21:15:27.360374  350555 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:27.360811  350555 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:27.361019  350555 main.go:141] libmachine: (functional-848083) Calling .GetState
I0108 21:15:27.362975  350555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 21:15:27.363019  350555 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 21:15:27.378180  350555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
I0108 21:15:27.378591  350555 main.go:141] libmachine: () Calling .GetVersion
I0108 21:15:27.379122  350555 main.go:141] libmachine: Using API Version  1
I0108 21:15:27.379150  350555 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 21:15:27.379648  350555 main.go:141] libmachine: () Calling .GetMachineName
I0108 21:15:27.379842  350555 main.go:141] libmachine: (functional-848083) Calling .DriverName
I0108 21:15:27.380092  350555 ssh_runner.go:195] Run: systemctl --version
I0108 21:15:27.380121  350555 main.go:141] libmachine: (functional-848083) Calling .GetSSHHostname
I0108 21:15:27.382936  350555 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:27.383393  350555 main.go:141] libmachine: (functional-848083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:9c:95", ip: ""} in network mk-functional-848083: {Iface:virbr1 ExpiryTime:2024-01-08 22:11:51 +0000 UTC Type:0 Mac:52:54:00:7a:9c:95 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:functional-848083 Clientid:01:52:54:00:7a:9c:95}
I0108 21:15:27.383428  350555 main.go:141] libmachine: (functional-848083) DBG | domain functional-848083 has defined IP address 192.168.50.208 and MAC address 52:54:00:7a:9c:95 in network mk-functional-848083
I0108 21:15:27.383559  350555 main.go:141] libmachine: (functional-848083) Calling .GetSSHPort
I0108 21:15:27.383754  350555 main.go:141] libmachine: (functional-848083) Calling .GetSSHKeyPath
I0108 21:15:27.383933  350555 main.go:141] libmachine: (functional-848083) Calling .GetSSHUsername
I0108 21:15:27.384131  350555 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/functional-848083/id_rsa Username:docker}
I0108 21:15:27.523173  350555 build_images.go:151] Building image from path: /tmp/build.4157290937.tar
I0108 21:15:27.523267  350555 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 21:15:27.560475  350555 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4157290937.tar
I0108 21:15:27.578148  350555 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4157290937.tar: stat -c "%s %y" /var/lib/minikube/build/build.4157290937.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4157290937.tar': No such file or directory
I0108 21:15:27.578194  350555 ssh_runner.go:362] scp /tmp/build.4157290937.tar --> /var/lib/minikube/build/build.4157290937.tar (3072 bytes)
I0108 21:15:27.650251  350555 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4157290937
I0108 21:15:27.670298  350555 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4157290937 -xf /var/lib/minikube/build/build.4157290937.tar
I0108 21:15:27.684875  350555 crio.go:297] Building image: /var/lib/minikube/build/build.4157290937
I0108 21:15:27.684943  350555 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-848083 /var/lib/minikube/build/build.4157290937 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0108 21:15:32.286094  350555 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-848083 /var/lib/minikube/build/build.4157290937 --cgroup-manager=cgroupfs: (4.601114148s)
I0108 21:15:32.286183  350555 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4157290937
I0108 21:15:32.307134  350555 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4157290937.tar
I0108 21:15:32.324350  350555 build_images.go:207] Built localhost/my-image:functional-848083 from /tmp/build.4157290937.tar
I0108 21:15:32.324383  350555 build_images.go:123] succeeded building to: functional-848083
I0108 21:15:32.324388  350555 build_images.go:124] failed building to: 
I0108 21:15:32.324432  350555 main.go:141] libmachine: Making call to close driver server
I0108 21:15:32.324446  350555 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:32.324750  350555 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:32.324772  350555 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 21:15:32.324784  350555 main.go:141] libmachine: Making call to close driver server
I0108 21:15:32.324794  350555 main.go:141] libmachine: (functional-848083) Calling .Close
I0108 21:15:32.325064  350555 main.go:141] libmachine: Successfully made call to close driver server
I0108 21:15:32.325086  350555 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 21:15:32.325106  350555 main.go:141] libmachine: (functional-848083) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.53s)
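The build step above packs testdata/build into a tarball, copies it into the VM, and runs it through "sudo podman build"; functional_test.go:447 then lists images to confirm the new tag is present. Below is a minimal stand-alone sketch of that final check in Go, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Binary path and profile name are taken from this run; adjust for another environment.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-848083", "image", "ls").CombinedOutput()
	if err != nil {
		fmt.Printf("image ls failed: %v\n%s\n", err, out)
		return
	}
	if strings.Contains(string(out), "localhost/my-image:functional-848083") {
		fmt.Println("built image is present in the container runtime")
	} else {
		fmt.Println("built image not found")
	}
}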

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.005285032s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-848083
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image load --daemon gcr.io/google-containers/addon-resizer:functional-848083 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 image load --daemon gcr.io/google-containers/addon-resizer:functional-848083 --alsologtostderr: (6.158394153s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image load --daemon gcr.io/google-containers/addon-resizer:functional-848083 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 image load --daemon gcr.io/google-containers/addon-resizer:functional-848083 --alsologtostderr: (5.452919282s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-848083
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image load --daemon gcr.io/google-containers/addon-resizer:functional-848083 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 image load --daemon gcr.io/google-containers/addon-resizer:functional-848083 --alsologtostderr: (4.255507353s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdspecific-port1856801754/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.141412ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdspecific-port1856801754/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 ssh "sudo umount -f /mount-9p": exit status 1 (273.365214ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-848083 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdspecific-port1856801754/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.21s)
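The first findmnt probe above exits with status 1 because the 9p mount has not come up yet; the harness simply retries the same command until it succeeds. A rough sketch of that poll-until-mounted loop, using the profile, mount point, and ssh command shown in this section:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry "findmnt" over minikube ssh until the 9p mount appears or the deadline passes.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-848083",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for /mount-9p")
}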

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image save gcr.io/google-containers/addon-resizer:functional-848083 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 image save gcr.io/google-containers/addon-resizer:functional-848083 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.500188028s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image rm gcr.io/google-containers/addon-resizer:functional-848083 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076446512/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076446512/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076446512/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T" /mount1: exit status 1 (314.518561ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-848083 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076446512/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076446512/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-848083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4076446512/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.872371711s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.22s)
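ImageSaveToFile and ImageLoadFromFile together exercise a tarball round trip: "image save" writes addon-resizer-save.tar on the host, and "image load" pushes that file back into the cluster's runtime. A condensed sketch of the same round trip built from the commands logged above; the tarball path is a stand-in, while the profile and image name come from this run:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	tar := "/tmp/addon-resizer-save.tar" // stand-in path; any writable location works
	img := "gcr.io/google-containers/addon-resizer:functional-848083"
	if err := run("-p", "functional-848083", "image", "save", img, tar); err != nil {
		fmt.Println("save failed:", err)
		return
	}
	if err := run("-p", "functional-848083", "image", "load", tar); err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println("save/load round trip complete")
}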

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-848083
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-848083 image save --daemon gcr.io/google-containers/addon-resizer:functional-848083 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-848083 image save --daemon gcr.io/google-containers/addon-resizer:functional-848083 --alsologtostderr: (2.243995053s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-848083
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.28s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-848083
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-848083
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-848083
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (110.4s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-798925 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0108 21:16:19.022035  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-798925 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m50.401970068s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (110.40s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.52s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-798925 addons enable ingress --alsologtostderr -v=5
E0108 21:17:40.943247  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-798925 addons enable ingress --alsologtostderr -v=5: (13.515236647s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.52s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-798925 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

                                                
                                    
TestJSONOutput/start/Command (100.82s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-766057 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0108 21:21:06.886786  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-766057 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.820690649s)
--- PASS: TestJSONOutput/start/Command (100.82s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-766057 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-766057 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-766057 --output=json --user=testUser
E0108 21:22:28.807311  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-766057 --output=json --user=testUser: (7.108965187s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-337629 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-337629 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.023685ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"59bbcbd8-324e-4145-a0c2-734a6528c77a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-337629] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a089d2c5-e2a1-453e-9111-0430ff14fbf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17866"}}
	{"specversion":"1.0","id":"cd875396-95c4-4a56-bf0e-2bd971a985f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2b61bc32-7810-4838-8296-22bbe440a2ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig"}}
	{"specversion":"1.0","id":"3b0b1b71-df50-4536-a8e8-af1c652fd1df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube"}}
	{"specversion":"1.0","id":"541263e3-24ed-4889-9ac3-df5e85d88823","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9cdb4fd0-5d7c-4526-9955-e543fdd5b7b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"40a28ee3-37c0-4429-878c-021fc00e13e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-337629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-337629
--- PASS: TestErrorJSONOutput (0.23s)
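Each stdout line above is a CloudEvents-style JSON object, and the final event carries the failure details (exit code 56, DRV_UNSUPPORTED_OS). A small sketch that decodes one of those lines; the struct mirrors only the keys visible in this output:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the keys seen in the JSON lines above; other fields are ignored.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",` +
		`"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	if e.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("minikube error %s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
	}
}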

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (100.36s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-455917 --driver=kvm2  --container-runtime=crio
E0108 21:22:44.574694  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:44.580047  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:44.590328  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:44.610730  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:44.651063  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:44.731481  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:44.891965  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:45.212701  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:45.853749  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:47.134663  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:49.695560  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:22:54.816227  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:23:05.056725  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-455917 --driver=kvm2  --container-runtime=crio: (47.306767848s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-458139 --driver=kvm2  --container-runtime=crio
E0108 21:23:25.537297  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:24:06.498314  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-458139 --driver=kvm2  --container-runtime=crio: (50.320167469s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-455917
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-458139
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-458139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-458139
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-458139: (1.001798599s)
helpers_test.go:175: Cleaning up "first-455917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-455917
--- PASS: TestMinikubeProfile (100.36s)
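TestMinikubeProfile starts two profiles and reads them back with "profile list -ojson". The exact JSON schema is not shown in this log, so the sketch below only decodes whatever comes back into a generic map and prints the top-level keys; treat the shape as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Decoding into a generic map avoids hard-coding a schema that this log does not show.
	var parsed map[string]interface{}
	if err := json.Unmarshal(out, &parsed); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for key := range parsed {
		fmt.Println("top-level key:", key)
	}
}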

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-153442 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-153442 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.136794439s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.14s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-153442 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-153442 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (33.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-169549 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0108 21:24:44.964338  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:24:56.855100  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:25:12.651541  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-169549 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (32.6239003s)
--- PASS: TestMountStart/serial/StartWithMountSecond (33.62s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169549 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169549 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-153442 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169549 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169549 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-169549
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-169549: (1.227261418s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.27s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-169549
E0108 21:25:28.419015  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-169549: (22.272980438s)
--- PASS: TestMountStart/serial/RestartStopped (23.27s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169549 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169549 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (106.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962345 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-962345 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.16264023s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.59s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-962345 -- rollout status deployment/busybox: (2.11039317s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-qwxd6 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-wmznk -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-qwxd6 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-wmznk -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-qwxd6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-962345 -- exec busybox-5bc68d56bd-wmznk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.03s)
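The deployment check above lists the busybox pod names with a jsonpath query and then runs nslookup for kubernetes.default inside each pod. A compact sketch of the same loop, shelling out to kubectl with the context used in this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the busybox pod names, then resolve kubernetes.default from inside each pod.
	names, err := exec.Command("kubectl", "--context", "multinode-962345", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("get pods failed:", err)
		return
	}
	for _, pod := range strings.Fields(string(names)) {
		out, err := exec.Command("kubectl", "--context", "multinode-962345",
			"exec", pod, "--", "nslookup", "kubernetes.default").CombinedOutput()
		fmt.Printf("%s:\n%s(err=%v)\n", pod, out, err)
	}
}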

                                                
                                    
TestMultiNode/serial/AddNode (46.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-962345 -v 3 --alsologtostderr
E0108 21:27:44.575071  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:28:12.259699  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-962345 -v 3 --alsologtostderr: (46.382913535s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.98s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-962345 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp testdata/cp-test.txt multinode-962345:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp multinode-962345:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2245121153/001/cp-test_multinode-962345.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp multinode-962345:/home/docker/cp-test.txt multinode-962345-m02:/home/docker/cp-test_multinode-962345_multinode-962345-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m02 "sudo cat /home/docker/cp-test_multinode-962345_multinode-962345-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp multinode-962345:/home/docker/cp-test.txt multinode-962345-m03:/home/docker/cp-test_multinode-962345_multinode-962345-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m03 "sudo cat /home/docker/cp-test_multinode-962345_multinode-962345-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp testdata/cp-test.txt multinode-962345-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp multinode-962345-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2245121153/001/cp-test_multinode-962345-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp multinode-962345-m02:/home/docker/cp-test.txt multinode-962345:/home/docker/cp-test_multinode-962345-m02_multinode-962345.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345 "sudo cat /home/docker/cp-test_multinode-962345-m02_multinode-962345.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp multinode-962345-m02:/home/docker/cp-test.txt multinode-962345-m03:/home/docker/cp-test_multinode-962345-m02_multinode-962345-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m03 "sudo cat /home/docker/cp-test_multinode-962345-m02_multinode-962345-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp testdata/cp-test.txt multinode-962345-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp multinode-962345-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2245121153/001/cp-test_multinode-962345-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp multinode-962345-m03:/home/docker/cp-test.txt multinode-962345:/home/docker/cp-test_multinode-962345-m03_multinode-962345.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345 "sudo cat /home/docker/cp-test_multinode-962345-m03_multinode-962345.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 cp multinode-962345-m03:/home/docker/cp-test.txt multinode-962345-m02:/home/docker/cp-test_multinode-962345-m03_multinode-962345-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 ssh -n multinode-962345-m02 "sudo cat /home/docker/cp-test_multinode-962345-m03_multinode-962345-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.76s)
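CopyFile pushes testdata/cp-test.txt to every node with "minikube cp" and reads it back with "minikube ssh -n <node> sudo cat ...". A stripped-down sketch of one such round trip against a single node; the profile and node name are taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

func minikube(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	// Copy a local file onto the m02 node, then cat it back over ssh to confirm it arrived intact.
	if out, err := minikube("-p", "multinode-962345", "cp", "testdata/cp-test.txt",
		"multinode-962345-m02:/home/docker/cp-test.txt"); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}
	out, err := minikube("-p", "multinode-962345", "ssh", "-n", "multinode-962345-m02",
		"sudo cat /home/docker/cp-test.txt")
	fmt.Printf("cat result (err=%v):\n%s", err, out)
}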

                                                
                                    
TestMultiNode/serial/StopNode (3.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-962345 node stop m03: (2.104714794s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-962345 status: exit status 7 (443.466852ms)

                                                
                                                
-- stdout --
	multinode-962345
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-962345-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-962345-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-962345 status --alsologtostderr: exit status 7 (456.052072ms)

                                                
                                                
-- stdout --
	multinode-962345
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-962345-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-962345-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:28:34.666569  357917 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:28:34.666691  357917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:28:34.666700  357917 out.go:309] Setting ErrFile to fd 2...
	I0108 21:28:34.666705  357917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:28:34.666889  357917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 21:28:34.667053  357917 out.go:303] Setting JSON to false
	I0108 21:28:34.667097  357917 mustload.go:65] Loading cluster: multinode-962345
	I0108 21:28:34.667194  357917 notify.go:220] Checking for updates...
	I0108 21:28:34.667529  357917 config.go:182] Loaded profile config "multinode-962345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:28:34.667558  357917 status.go:255] checking status of multinode-962345 ...
	I0108 21:28:34.668131  357917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:28:34.668199  357917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:28:34.686343  357917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0108 21:28:34.686856  357917 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:28:34.687485  357917 main.go:141] libmachine: Using API Version  1
	I0108 21:28:34.687510  357917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:28:34.687973  357917 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:28:34.688190  357917 main.go:141] libmachine: (multinode-962345) Calling .GetState
	I0108 21:28:34.690049  357917 status.go:330] multinode-962345 host status = "Running" (err=<nil>)
	I0108 21:28:34.690067  357917 host.go:66] Checking if "multinode-962345" exists ...
	I0108 21:28:34.690385  357917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:28:34.690423  357917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:28:34.705771  357917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0108 21:28:34.706178  357917 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:28:34.706571  357917 main.go:141] libmachine: Using API Version  1
	I0108 21:28:34.706592  357917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:28:34.706891  357917 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:28:34.707082  357917 main.go:141] libmachine: (multinode-962345) Calling .GetIP
	I0108 21:28:34.709627  357917 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:28:34.710058  357917 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:28:34.710188  357917 host.go:66] Checking if "multinode-962345" exists ...
	I0108 21:28:34.710176  357917 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:28:34.710508  357917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:28:34.710556  357917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:28:34.725549  357917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I0108 21:28:34.725974  357917 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:28:34.726408  357917 main.go:141] libmachine: Using API Version  1
	I0108 21:28:34.726428  357917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:28:34.726739  357917 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:28:34.726921  357917 main.go:141] libmachine: (multinode-962345) Calling .DriverName
	I0108 21:28:34.727137  357917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:28:34.727177  357917 main.go:141] libmachine: (multinode-962345) Calling .GetSSHHostname
	I0108 21:28:34.729810  357917 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:28:34.730155  357917 main.go:141] libmachine: (multinode-962345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:54:bf", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:25:59 +0000 UTC Type:0 Mac:52:54:00:cf:54:bf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-962345 Clientid:01:52:54:00:cf:54:bf}
	I0108 21:28:34.730185  357917 main.go:141] libmachine: (multinode-962345) DBG | domain multinode-962345 has defined IP address 192.168.39.239 and MAC address 52:54:00:cf:54:bf in network mk-multinode-962345
	I0108 21:28:34.730303  357917 main.go:141] libmachine: (multinode-962345) Calling .GetSSHPort
	I0108 21:28:34.730501  357917 main.go:141] libmachine: (multinode-962345) Calling .GetSSHKeyPath
	I0108 21:28:34.730619  357917 main.go:141] libmachine: (multinode-962345) Calling .GetSSHUsername
	I0108 21:28:34.730779  357917 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345/id_rsa Username:docker}
	I0108 21:28:34.826101  357917 ssh_runner.go:195] Run: systemctl --version
	I0108 21:28:34.832588  357917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:28:34.847742  357917 kubeconfig.go:92] found "multinode-962345" server: "https://192.168.39.239:8443"
	I0108 21:28:34.847773  357917 api_server.go:166] Checking apiserver status ...
	I0108 21:28:34.847809  357917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:28:34.859636  357917 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1062/cgroup
	I0108 21:28:34.869145  357917 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod6dbed9a3f64fb2ec41dcc39fae30b654/crio-e72dfd688d628935b061a523a4c79256b249f57b06a8f0eb669951cf8fec000b"
	I0108 21:28:34.869219  357917 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod6dbed9a3f64fb2ec41dcc39fae30b654/crio-e72dfd688d628935b061a523a4c79256b249f57b06a8f0eb669951cf8fec000b/freezer.state
	I0108 21:28:34.880124  357917 api_server.go:204] freezer state: "THAWED"
	I0108 21:28:34.880155  357917 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0108 21:28:34.885266  357917 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I0108 21:28:34.885290  357917 status.go:421] multinode-962345 apiserver status = Running (err=<nil>)
	I0108 21:28:34.885299  357917 status.go:257] multinode-962345 status: &{Name:multinode-962345 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:28:34.885320  357917 status.go:255] checking status of multinode-962345-m02 ...
	I0108 21:28:34.885656  357917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:28:34.885697  357917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:28:34.900509  357917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0108 21:28:34.900917  357917 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:28:34.901403  357917 main.go:141] libmachine: Using API Version  1
	I0108 21:28:34.901424  357917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:28:34.901798  357917 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:28:34.901988  357917 main.go:141] libmachine: (multinode-962345-m02) Calling .GetState
	I0108 21:28:34.903706  357917 status.go:330] multinode-962345-m02 host status = "Running" (err=<nil>)
	I0108 21:28:34.903729  357917 host.go:66] Checking if "multinode-962345-m02" exists ...
	I0108 21:28:34.904095  357917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:28:34.904163  357917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:28:34.920161  357917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0108 21:28:34.920632  357917 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:28:34.921097  357917 main.go:141] libmachine: Using API Version  1
	I0108 21:28:34.921121  357917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:28:34.921461  357917 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:28:34.921684  357917 main.go:141] libmachine: (multinode-962345-m02) Calling .GetIP
	I0108 21:28:34.924234  357917 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:28:34.924599  357917 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:28:34.924628  357917 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:28:34.924751  357917 host.go:66] Checking if "multinode-962345-m02" exists ...
	I0108 21:28:34.925063  357917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:28:34.925103  357917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:28:34.939696  357917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36059
	I0108 21:28:34.940184  357917 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:28:34.940724  357917 main.go:141] libmachine: Using API Version  1
	I0108 21:28:34.940749  357917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:28:34.941056  357917 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:28:34.941261  357917 main.go:141] libmachine: (multinode-962345-m02) Calling .DriverName
	I0108 21:28:34.941453  357917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:28:34.941472  357917 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHHostname
	I0108 21:28:34.944108  357917 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:28:34.944555  357917 main.go:141] libmachine: (multinode-962345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b0:38", ip: ""} in network mk-multinode-962345: {Iface:virbr1 ExpiryTime:2024-01-08 22:27:04 +0000 UTC Type:0 Mac:52:54:00:3b:b0:38 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:multinode-962345-m02 Clientid:01:52:54:00:3b:b0:38}
	I0108 21:28:34.944593  357917 main.go:141] libmachine: (multinode-962345-m02) DBG | domain multinode-962345-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:3b:b0:38 in network mk-multinode-962345
	I0108 21:28:34.944706  357917 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHPort
	I0108 21:28:34.944858  357917 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHKeyPath
	I0108 21:28:34.945022  357917 main.go:141] libmachine: (multinode-962345-m02) Calling .GetSSHUsername
	I0108 21:28:34.945150  357917 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-334768/.minikube/machines/multinode-962345-m02/id_rsa Username:docker}
	I0108 21:28:35.026536  357917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:28:35.039138  357917 status.go:257] multinode-962345-m02 status: &{Name:multinode-962345-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:28:35.039182  357917 status.go:255] checking status of multinode-962345-m03 ...
	I0108 21:28:35.039579  357917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:28:35.039624  357917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:28:35.055062  357917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42761
	I0108 21:28:35.055561  357917 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:28:35.056040  357917 main.go:141] libmachine: Using API Version  1
	I0108 21:28:35.056063  357917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:28:35.056420  357917 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:28:35.056616  357917 main.go:141] libmachine: (multinode-962345-m03) Calling .GetState
	I0108 21:28:35.058192  357917 status.go:330] multinode-962345-m03 host status = "Stopped" (err=<nil>)
	I0108 21:28:35.058211  357917 status.go:343] host is not running, skipping remaining checks
	I0108 21:28:35.058218  357917 status.go:257] multinode-962345-m03 status: &{Name:multinode-962345-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.01s)
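
For reference, the node stop/status flow this test drives can be reproduced by hand. The following is a minimal sketch using the profile name from the log above; it is an illustration, not part of the recorded run:

    # stop only the m03 worker, then check cluster-wide status;
    # as seen above, `minikube status` exits non-zero when any node is not running
    minikube -p multinode-962345 node stop m03
    minikube -p multinode-962345 status || echo "at least one node is down (exit $?)"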

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-962345 node start m03 --alsologtostderr: (28.845834338s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.52s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-962345 node delete m03: (1.034873278s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.60s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (536.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962345 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0108 21:44:44.964765  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:44:56.855775  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:47:44.574902  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 21:48:00.145217  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 21:49:44.963970  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 21:49:56.855535  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-962345 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (8m56.316306923s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-962345 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (536.94s)
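
The go-template used above prints each node's Ready condition. An equivalent jsonpath form, shown only as an illustration and not used by the test itself:

    # print "<node> <Ready status>" per node; every line should end in True
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'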

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (51.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-962345
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962345-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-962345-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (89.798558ms)

                                                
                                                
-- stdout --
	* [multinode-962345-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-962345-m02' is duplicated with machine name 'multinode-962345-m02' in profile 'multinode-962345'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-962345-m03 --driver=kvm2  --container-runtime=crio
E0108 21:52:44.574457  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-962345-m03 --driver=kvm2  --container-runtime=crio: (50.145504742s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-962345
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-962345: exit status 80 (259.796461ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-962345
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-962345-m03 already exists in multinode-962345-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-962345-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-962345-m03: (1.076494775s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.64s)
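
The conflict arises because worker machines of an existing profile are named <profile>-m02, <profile>-m03, and so on, so a new profile that reuses one of those names is rejected. A hand-run sketch of the same collision (the "conflict-check" name is a hypothetical example, not used by the test):

    # multinode-962345 already owns a machine named multinode-962345-m02,
    # so reusing it as a profile name fails with MK_USAGE (exit 14)
    minikube start -p multinode-962345-m02 --driver=kvm2 --container-runtime=crio
    # any profile name not already used as a machine name is accepted
    minikube start -p conflict-check --driver=kvm2 --container-runtime=crio
    minikube delete -p conflict-check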

                                                
                                    
TestKubernetesUpgrade (236.48s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0108 21:59:56.854797  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.169718796s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-216954
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-216954: (6.125898932s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-216954 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-216954 status --format={{.Host}}: exit status 7 (103.961064ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.858092639s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-216954 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (155.95292ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-216954] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-216954
	    minikube start -p kubernetes-upgrade-216954 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2169542 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-216954 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.702043429s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-216954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-216954
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-216954: (1.279642775s)
--- PASS: TestKubernetesUpgrade (236.48s)
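
Condensed, the path exercised above is: start on v1.16.0, stop, restart on v1.29.0-rc.2, then verify that a downgrade attempt is refused. A hand-run sketch with the same flags and profile name taken from the log:

    minikube start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-216954
    minikube start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
    # going back down is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106)
    minikube start -p kubernetes-upgrade-216954 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube delete -p kubernetes-upgrade-216954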

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806144 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-806144 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (117.983792ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-806144] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
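
As the error text says, --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the failing call and the suggested remedy, both taken from the messages above:

    # rejected with MK_USAGE (exit 14): a version cannot be pinned for a cluster without Kubernetes
    minikube start -p NoKubernetes-806144 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # if kubernetes-version was set globally, clear it first, then retry without the flag
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-806144 --no-kubernetes --driver=kvm2 --container-runtime=crio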

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (117.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806144 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806144 --driver=kvm2  --container-runtime=crio: (1m56.743259317s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-806144 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (117.12s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.34s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806144 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806144 --no-kubernetes --driver=kvm2  --container-runtime=crio: (7.880620419s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-806144 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-806144 status -o json: exit status 2 (298.14469ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-806144","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-806144
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-806144: (1.080314331s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.26s)
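
The JSON status above can also be checked mechanically; a minimal sketch assuming jq is available on the host (the test itself does not use jq):

    # expect Host=Running while Kubelet/APIServer are Stopped for a --no-kubernetes profile
    minikube -p NoKubernetes-806144 status -o json | jq -r '"\(.Host) \(.Kubelet) \(.APIServer)"'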

                                                
                                    
TestNoKubernetes/serial/Start (31.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806144 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806144 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.228738295s)
--- PASS: TestNoKubernetes/serial/Start (31.23s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-806144 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-806144 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.405783ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
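
The exit status 1 above wraps the status 3 returned by systemctl inside the guest, which is what `systemctl is-active` reports for a unit that is not active. A simplified hand-run variant of the same check (sketch only):

    # exit code is 0 only when the kubelet unit is active inside the guest
    minikube ssh -p NoKubernetes-806144 "sudo systemctl is-active kubelet" \
      && echo "kubelet running" || echo "kubelet not running"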

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.38s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-806144
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-806144: (2.15969549s)
--- PASS: TestNoKubernetes/serial/Stop (2.16s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (55.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806144 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806144 --driver=kvm2  --container-runtime=crio: (55.238524088s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (55.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-806144 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-806144 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.131831ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestPause/serial/Start (67.85s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-415665 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-415665 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m7.845003012s)
--- PASS: TestPause/serial/Start (67.85s)

                                                
                                    
TestNetworkPlugins/group/false (5.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-587823 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-587823 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (1.366704637s)

                                                
                                                
-- stdout --
	* [false-587823] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 22:03:54.980834  369947 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:03:54.981156  369947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:03:54.981166  369947 out.go:309] Setting ErrFile to fd 2...
	I0108 22:03:54.981171  369947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:03:54.981440  369947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-334768/.minikube/bin
	I0108 22:03:54.982149  369947 out.go:303] Setting JSON to false
	I0108 22:03:54.983314  369947 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9961,"bootTime":1704741474,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:03:54.983442  369947 start.go:138] virtualization: kvm guest
	I0108 22:03:55.065675  369947 out.go:177] * [false-587823] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:03:55.214329  369947 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:03:55.213708  369947 notify.go:220] Checking for updates...
	I0108 22:03:55.268176  369947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:03:55.399193  369947 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-334768/kubeconfig
	I0108 22:03:55.468654  369947 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-334768/.minikube
	I0108 22:03:55.587327  369947 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:03:55.673915  369947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:03:55.749177  369947 config.go:182] Loaded profile config "cert-expiration-523607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:03:55.749500  369947 config.go:182] Loaded profile config "pause-415665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:03:55.749596  369947 config.go:182] Loaded profile config "stopped-upgrade-878657": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 22:03:55.749761  369947 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:03:55.873391  369947 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 22:03:55.989926  369947 start.go:298] selected driver: kvm2
	I0108 22:03:55.989984  369947 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:03:55.990001  369947 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:03:56.067917  369947 out.go:177] 
	W0108 22:03:56.133414  369947 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0108 22:03:56.239681  369947 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-587823 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-587823

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-587823" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 22:02:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.150:8443
  name: cert-expiration-523607
contexts:
- context:
    cluster: cert-expiration-523607
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 22:02:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-523607
  name: cert-expiration-523607
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-523607
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/cert-expiration-523607/client.crt
    client-key: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/cert-expiration-523607/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-587823

>>> host: docker daemon status:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: docker daemon config:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: /etc/docker/daemon.json:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: docker system info:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: cri-docker daemon status:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: cri-docker daemon config:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: cri-dockerd version:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: containerd daemon status:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: containerd daemon config:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: /etc/containerd/config.toml:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: containerd config dump:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: crio daemon status:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: crio daemon config:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: /etc/crio:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

>>> host: crio config:
* Profile "false-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-587823"

----------------------- debugLogs end: false-587823 [took: 4.050624021s] --------------------------------
helpers_test.go:175: Cleaning up "false-587823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-587823
--- PASS: TestNetworkPlugins/group/false (5.60s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (97.53s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-415665 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0108 22:04:40.146324  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
E0108 22:04:44.964288  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 22:04:56.854666  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-415665 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m37.500069778s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (97.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (164.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-079759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-079759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m44.968271998s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.97s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-878657
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (172.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-675668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-675668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m52.609530505s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (172.61s)

                                                
                                    
TestPause/serial/Pause (1.32s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-415665 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-415665 --alsologtostderr -v=5: (1.320540118s)
--- PASS: TestPause/serial/Pause (1.32s)

                                                
                                    
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-415665 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-415665 --output=json --layout=cluster: exit status 2 (297.676938ms)

                                                
                                                
-- stdout --
	{"Name":"pause-415665","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-415665","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

                                                
                                    
TestPause/serial/Unpause (1.29s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-415665 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-415665 --alsologtostderr -v=5: (1.293445465s)
--- PASS: TestPause/serial/Unpause (1.29s)

                                                
                                    
TestPause/serial/PauseAgain (1.51s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-415665 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-415665 --alsologtostderr -v=5: (1.505512531s)
--- PASS: TestPause/serial/PauseAgain (1.51s)

                                                
                                    
TestPause/serial/DeletePaused (1.01s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-415665 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-415665 --alsologtostderr -v=5: (1.007864316s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.76s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (136.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-903819 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-903819 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m16.735578242s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (136.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (135.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-292054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 22:07:44.574865  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-292054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m15.858832882s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (135.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-079759 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dc706965-4d2e-4bd5-a1c1-0616462e9840] Pending
helpers_test.go:344: "busybox" [dc706965-4d2e-4bd5-a1c1-0616462e9840] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dc706965-4d2e-4bd5-a1c1-0616462e9840] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004130075s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-079759 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-079759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-079759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011464037s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-079759 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-675668 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [64a40179-c4d0-4ec5-a8e7-4545bfb97e3d] Pending
helpers_test.go:344: "busybox" [64a40179-c4d0-4ec5-a8e7-4545bfb97e3d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [64a40179-c4d0-4ec5-a8e7-4545bfb97e3d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005591782s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-675668 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-903819 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [df640c3b-acbf-4697-a0b3-d413a383e3f1] Pending
helpers_test.go:344: "busybox" [df640c3b-acbf-4697-a0b3-d413a383e3f1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [df640c3b-acbf-4697-a0b3-d413a383e3f1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005497728s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-903819 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-675668 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-675668 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.235492333s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-675668 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-903819 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-903819 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.29841645s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-903819 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-292054 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2174377e-5f79-4726-a6f4-716b836ffd20] Pending
helpers_test.go:344: "busybox" [2174377e-5f79-4726-a6f4-716b836ffd20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2174377e-5f79-4726-a6f4-716b836ffd20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.0058051s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-292054 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-292054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-292054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.274078644s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-292054 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (411.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-079759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-079759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (6m50.913058115s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079759 -n old-k8s-version-079759
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (411.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (603.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-675668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-675668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m3.197546553s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-675668 -n no-preload-675668
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (603.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (860.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-903819 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-903819 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m20.420832218s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-903819 -n embed-certs-903819
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (860.76s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (882.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-292054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 22:12:27.621522  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 22:12:44.574334  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
E0108 22:14:44.964873  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 22:14:56.855675  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-292054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m42.039269134s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-292054 -n default-k8s-diff-port-292054
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (882.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (65.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-154365 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-154365 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m5.717259592s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.72s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (113.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m53.680148073s)
--- PASS: TestNetworkPlugins/group/auto/Start (113.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-154365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-154365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.300822978s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (80.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m20.290918468s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (96.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m36.330294049s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.33s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-587823 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (15.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-587823 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-587823 replace --force -f testdata/netcat-deployment.yaml: (2.169878365s)
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v86r6" [e021fc4f-8788-4e0e-9669-08748fbc2d05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v86r6" [e021fc4f-8788-4e0e-9669-08748fbc2d05] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.006207504s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7g8d7" [772919ff-097d-455e-b6ca-6a378ca0d0d7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00695055s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-587823 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-587823 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-587823 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vnsfz" [626f4f62-ecf9-45ed-834b-91bad7dd257c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 22:37:46.903325  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:46.908708  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:46.919101  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:46.939553  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:46.980321  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:47.060681  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:47.221196  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:47.542222  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:48.183577  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:49.464804  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
E0108 22:37:52.025650  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vnsfz" [626f4f62-ecf9-45ed-834b-91bad7dd257c] Running
E0108 22:37:57.146642  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.005583918s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-587823 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (98.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0108 22:38:07.387071  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m38.251373206s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (98.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (119.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0108 22:38:21.638413  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:21.643794  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:21.654243  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:21.674655  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:21.715021  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:21.795405  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:21.955906  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:22.276568  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:22.917628  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:24.198840  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:26.759568  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:38:27.867650  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/old-k8s-version-079759/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m59.01658081s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (119.02s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rhzgj" [821ec5cd-1959-4758-838a-8b7f5d8cf3a0] Running
E0108 22:38:31.880004  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00887189s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-587823 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (16.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-587823 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hccpg" [241170fd-16a8-42fd-9e87-20c7deb5aa12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hccpg" [241170fd-16a8-42fd-9e87-20c7deb5aa12] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 16.008069671s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (16.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (414.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-154365 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-154365 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (6m54.120561167s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-154365 -n newest-cni-154365
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (414.44s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-587823 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (340.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0108 22:39:14.269469  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/client.crt: no such file or directory
E0108 22:39:24.510004  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (5m40.341966602s)
--- PASS: TestNetworkPlugins/group/flannel/Start (340.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-587823 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-587823 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t9mtt" [ff6765fe-ceeb-4dbc-9ddb-d3cc1cec52db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 22:39:43.561664  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/no-preload-675668/client.crt: no such file or directory
E0108 22:39:44.964695  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/functional-848083/client.crt: no such file or directory
E0108 22:39:44.990989  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-t9mtt" [ff6765fe-ceeb-4dbc-9ddb-d3cc1cec52db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005091859s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-587823 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (337.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-587823 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (5m37.244793617s)
--- PASS: TestNetworkPlugins/group/bridge/Start (337.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-587823 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-587823 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dvgbw" [7777ae8b-dd04-4569-b0af-76ec871a6996] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dvgbw" [7777ae8b-dd04-4569-b0af-76ec871a6996] Running
E0108 22:40:25.951540  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/default-k8s-diff-port-292054/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005320272s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.30s)
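The NetCatPod step deploys the netcat workload and then waits for it to report Ready. A minimal manual equivalent (deployment and selector names come from testdata/netcat-deployment.yaml):

kubectl --context enable-default-cni-587823 replace --force -f testdata/netcat-deployment.yaml
# Block until the pod behind app=netcat is Ready, mirroring the test's 15m wait.
kubectl --context enable-default-cni-587823 wait --for=condition=ready pod -l app=netcat --timeout=15m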

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-587823 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lvg2s" [06650716-69c8-411b-b3a2-d66655752c47] Running
E0108 22:44:56.854130  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/addons-417518/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006022627s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
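The ControllerPod check only waits for the flannel DaemonSet pod in the kube-flannel namespace to be Running; by hand that is roughly:

kubectl --context flannel-587823 -n kube-flannel get pods -l app=flannel
# Or wait for readiness the same way the test does (it allows up to 10m).
kubectl --context flannel-587823 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m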

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-587823 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (14.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-587823 replace --force -f testdata/netcat-deployment.yaml
E0108 22:45:00.301127  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/custom-flannel-587823/client.crt: no such file or directory
net_test.go:149: (dbg) Done: kubectl --context flannel-587823 replace --force -f testdata/netcat-deployment.yaml: (2.161157281s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fdgfq" [8e1d6438-efc7-443e-b22b-eda860eccdc2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fdgfq" [8e1d6438-efc7-443e-b22b-eda860eccdc2] Running
E0108 22:45:12.858349  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/auto-587823/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005867321s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-587823 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-154365 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
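VerifyKubernetesImages lists the images loaded into the profile and reports anything outside the expected minikube set (here kindest/kindnetd). The same listing can be pulled directly:

# JSON output, as the test consumes it.
minikube -p newest-cni-154365 image list --format=json
# Plain listing for a quick manual look.
minikube -p newest-cni-154365 image list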

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-154365 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-154365 -n newest-cni-154365
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-154365 -n newest-cni-154365: exit status 2 (291.403424ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-154365 -n newest-cni-154365
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-154365 -n newest-cni-154365: exit status 2 (282.597383ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-154365 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-154365 -n newest-cni-154365
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-154365 -n newest-cni-154365
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.71s)
E0108 22:45:58.995595  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/enable-default-cni-587823/client.crt: no such file or directory
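The Pause step cycles the profile through pause and unpause and polls component status in between; the exit status 2 results above are expected while components are paused (the test itself notes "may be ok"). A sketch of the same sequence:

minikube pause -p newest-cni-154365 --alsologtostderr -v=1
# While paused, these report Paused/Stopped and exit non-zero.
minikube status --format='{{.APIServer}}' -p newest-cni-154365
minikube status --format='{{.Kubelet}}' -p newest-cni-154365
minikube unpause -p newest-cni-154365 --alsologtostderr -v=1
# After unpausing, both should report Running again.
minikube status --format='{{.APIServer}}' -p newest-cni-154365
minikube status --format='{{.Kubelet}}' -p newest-cni-154365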

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-587823 "pgrep -a kubelet"
E0108 22:45:47.622828  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/ingress-addon-legacy-798925/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (17.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-587823 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xlkkr" [45014db0-1cea-4967-9e24-4997f6c282d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xlkkr" [45014db0-1cea-4967-9e24-4997f6c282d1] Running
E0108 22:46:01.742435  341982 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/custom-flannel-587823/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 17.005059944s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (17.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-587823 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-587823 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (39/306)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
52 TestDockerFlags 0
55 TestDockerEnvContainerd 0
57 TestHyperKitDriverInstallOrUpdate 0
58 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/DockerEnv 0
110 TestFunctional/parallel/PodmanEnv 0
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
158 TestGvisorAddon 0
159 TestImageBuild 0
192 TestKicCustomNetwork 0
193 TestKicExistingNetwork 0
194 TestKicCustomSubnet 0
195 TestKicStaticIP 0
227 TestChangeNoneUser 0
230 TestScheduledStopWindows 0
232 TestSkaffold 0
234 TestInsufficientStorage 0
238 TestMissingContainerUpgrade 0
254 TestStartStop/group/disable-driver-mounts 0.21
262 TestNetworkPlugins/group/kubenet 5.1
270 TestNetworkPlugins/group/cilium 4.67
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-343954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-343954
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-587823 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-587823" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 22:02:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.150:8443
  name: cert-expiration-523607
contexts:
- context:
    cluster: cert-expiration-523607
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 22:02:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-523607
  name: cert-expiration-523607
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-523607
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/cert-expiration-523607/client.crt
    client-key: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/cert-expiration-523607/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-587823

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-587823"

                                                
                                                
----------------------- debugLogs end: kubenet-587823 [took: 4.491337504s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-587823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-587823
--- SKIP: TestNetworkPlugins/group/kubenet (5.10s)
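The repeated "context was not found" and "Profile ... not found" lines in the debugLogs above are expected: the kubenet variant is skipped before any cluster is created, so there is never a kubenet-587823 profile or kubeconfig context for the log collector to query; the only context it finds is the leftover cert-expiration-523607 entry. Assuming the same .minikube layout, this is easy to confirm:

# No kubenet-587823 profile exists for this run.
minikube profile list
# The kubeconfig only knows about contexts from other profiles.
kubectl config get-contexts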

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-587823 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-587823" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-334768/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 22:02:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.150:8443
  name: cert-expiration-523607
contexts:
- context:
    cluster: cert-expiration-523607
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 22:02:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-523607
  name: cert-expiration-523607
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-523607
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/cert-expiration-523607/client.crt
    client-key: /home/jenkins/minikube-integration/17866-334768/.minikube/profiles/cert-expiration-523607/client.key
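
Note that the kubeconfig dumped above contains only the cert-expiration-523607 cluster and an empty current-context; there is no cilium-587823 entry, which is consistent with every kubectl command in this debugLogs section failing with "context was not found": the profile was never started because the test was skipped at net_test.go:102.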

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-587823

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-587823" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-587823"

                                                
                                                
----------------------- debugLogs end: cilium-587823 [took: 4.488643102s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-587823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-587823
--- SKIP: TestNetworkPlugins/group/cilium (4.67s)

                                                
                                    